Nov 29 07:15:08 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 29 07:15:08 crc restorecon[4592]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 29 07:15:08 crc restorecon[4592]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:08 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 
07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc 
restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 29 07:15:09 crc restorecon[4592]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
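[Note on the restorecon output above: the long runs of "not reset as customized by admin" are restorecon skipping files whose current SELinux type (container_file_t here) is listed among the policy's customizable types; a plain relabel leaves such files alone unless it is forced. A minimal sketch of inspecting and overriding that behavior on a targeted-policy host — standard restorecon semantics, though the exact type listing varies by selinux-policy release:

    # Types restorecon treats as admin-customized; container_file_t is
    # typically listed on RHEL-family targeted policy.
    cat /etc/selinux/targeted/contexts/customizable_types

    # Default run: fixes mismatched labels but skips customizable types,
    # emitting the "not reset as customized by admin" messages seen above.
    restorecon -Rv /var/lib/kubelet

    # -F additionally forces customizable types back to the policy default.
    restorecon -RvF /var/lib/kubelet
]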
Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 29 07:15:09 crc kubenswrapper[4660]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.553772 4660 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557145 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557164 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557170 4660 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557175 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557180 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557184 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557189 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557193 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557197 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557202 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557206 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557210 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557215 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557220 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557224 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557228 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557233 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557238 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557242 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557246 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 
07:15:09.557251 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557255 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557260 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557265 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557269 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557291 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557297 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557302 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557309 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557315 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557320 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557326 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557330 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557335 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557340 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557345 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557349 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557353 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557358 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557365 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557370 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557375 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557379 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557384 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557390 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557396 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557403 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557408 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557413 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557418 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557423 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557427 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557431 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557436 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557440 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557444 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557448 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557453 4660 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557459 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557464 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557469 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557474 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557480 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557485 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557489 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557495 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557502 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
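The wall of feature_gate.go:330 warnings is mostly OpenShift-level gates (GatewayAPI, NewOLM, InsightsConfig, ...) that the embedded Kubernetes feature-gate registry simply does not know. The kubelet prints the full list each time it parses the gate map, and it parses it several more times below, so the same ~70 names recur throughout this boot. A small sketch to boil a captured journal down to one count per gate; `kubelet.log` is a hypothetical capture, e.g. from `journalctl -u kubelet > kubelet.log`.

```python
# Sketch: collapse repeated "unrecognized feature gate" warnings from a
# captured kubelet journal into one count per distinct gate name.
import re
from collections import Counter

PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

with open("kubelet.log") as fh:  # hypothetical capture of this journal
    gates = Counter(PATTERN.findall(fh.read()))

for gate, seen in sorted(gates.items()):
    print(f"{gate}: warned {seen}x")
# On this boot each name shows up roughly four times -- once per parse of
# the gate map -- so a noticeably higher count would be the interesting case.
```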
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557508 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557512 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557517 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.557521 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557649 4660 flags.go:64] FLAG: --address="0.0.0.0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557662 4660 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557671 4660 flags.go:64] FLAG: --anonymous-auth="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557678 4660 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557686 4660 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557692 4660 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557699 4660 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557706 4660 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557711 4660 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557717 4660 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557723 4660 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557728 4660 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557734 4660 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557739 4660 flags.go:64] FLAG: --cgroup-root="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557744 4660 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557749 4660 flags.go:64] FLAG: --client-ca-file="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557754 4660 flags.go:64] FLAG: --cloud-config="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557759 4660 flags.go:64] FLAG: --cloud-provider="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557764 4660 flags.go:64] FLAG: --cluster-dns="[]" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557773 4660 flags.go:64] FLAG: --cluster-domain="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557778 4660 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557784 4660 flags.go:64] FLAG: --config-dir="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557789 4660 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557795 4660 flags.go:64] FLAG: --container-log-max-files="5" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557803 4660 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 29 07:15:09 
crc kubenswrapper[4660]: I1129 07:15:09.557808 4660 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557814 4660 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557819 4660 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557826 4660 flags.go:64] FLAG: --contention-profiling="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557831 4660 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557837 4660 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557843 4660 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557848 4660 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557855 4660 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557860 4660 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557866 4660 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557871 4660 flags.go:64] FLAG: --enable-load-reader="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557876 4660 flags.go:64] FLAG: --enable-server="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557881 4660 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557888 4660 flags.go:64] FLAG: --event-burst="100" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557895 4660 flags.go:64] FLAG: --event-qps="50" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557901 4660 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557906 4660 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557911 4660 flags.go:64] FLAG: --eviction-hard="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557917 4660 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557922 4660 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557928 4660 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557933 4660 flags.go:64] FLAG: --eviction-soft="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557938 4660 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557943 4660 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557948 4660 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557954 4660 flags.go:64] FLAG: --experimental-mounter-path="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557958 4660 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557964 4660 flags.go:64] FLAG: --fail-swap-on="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557969 4660 flags.go:64] FLAG: --feature-gates="" Nov 
29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557976 4660 flags.go:64] FLAG: --file-check-frequency="20s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557982 4660 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557988 4660 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557993 4660 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.557998 4660 flags.go:64] FLAG: --healthz-port="10248" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558004 4660 flags.go:64] FLAG: --help="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558009 4660 flags.go:64] FLAG: --hostname-override="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558014 4660 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558020 4660 flags.go:64] FLAG: --http-check-frequency="20s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558026 4660 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558031 4660 flags.go:64] FLAG: --image-credential-provider-config="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558036 4660 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558040 4660 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558046 4660 flags.go:64] FLAG: --image-service-endpoint="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558051 4660 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558056 4660 flags.go:64] FLAG: --kube-api-burst="100" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558061 4660 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558067 4660 flags.go:64] FLAG: --kube-api-qps="50" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558072 4660 flags.go:64] FLAG: --kube-reserved="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558078 4660 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558083 4660 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558089 4660 flags.go:64] FLAG: --kubelet-cgroups="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558094 4660 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558100 4660 flags.go:64] FLAG: --lock-file="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558105 4660 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558110 4660 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558115 4660 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558123 4660 flags.go:64] FLAG: --log-json-split-stream="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558128 4660 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558134 4660 flags.go:64] FLAG: --log-text-split-stream="false" Nov 29 07:15:09 crc 
kubenswrapper[4660]: I1129 07:15:09.558138 4660 flags.go:64] FLAG: --logging-format="text" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558143 4660 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558149 4660 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558154 4660 flags.go:64] FLAG: --manifest-url="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558159 4660 flags.go:64] FLAG: --manifest-url-header="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558166 4660 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558172 4660 flags.go:64] FLAG: --max-open-files="1000000" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558179 4660 flags.go:64] FLAG: --max-pods="110" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558184 4660 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558190 4660 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558194 4660 flags.go:64] FLAG: --memory-manager-policy="None" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558226 4660 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558232 4660 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558238 4660 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558243 4660 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558255 4660 flags.go:64] FLAG: --node-status-max-images="50" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558261 4660 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558266 4660 flags.go:64] FLAG: --oom-score-adj="-999" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558271 4660 flags.go:64] FLAG: --pod-cidr="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558276 4660 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558285 4660 flags.go:64] FLAG: --pod-manifest-path="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558290 4660 flags.go:64] FLAG: --pod-max-pids="-1" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558295 4660 flags.go:64] FLAG: --pods-per-core="0" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558300 4660 flags.go:64] FLAG: --port="10250" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558305 4660 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558310 4660 flags.go:64] FLAG: --provider-id="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558315 4660 flags.go:64] FLAG: --qos-reserved="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558322 4660 flags.go:64] FLAG: --read-only-port="10255" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558327 4660 flags.go:64] FLAG: --register-node="true" Nov 29 07:15:09 crc 
kubenswrapper[4660]: I1129 07:15:09.558332 4660 flags.go:64] FLAG: --register-schedulable="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558337 4660 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558346 4660 flags.go:64] FLAG: --registry-burst="10" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558351 4660 flags.go:64] FLAG: --registry-qps="5" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558357 4660 flags.go:64] FLAG: --reserved-cpus="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558362 4660 flags.go:64] FLAG: --reserved-memory="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558368 4660 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558373 4660 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558379 4660 flags.go:64] FLAG: --rotate-certificates="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558384 4660 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558390 4660 flags.go:64] FLAG: --runonce="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558394 4660 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558400 4660 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558405 4660 flags.go:64] FLAG: --seccomp-default="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558411 4660 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558417 4660 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558422 4660 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558428 4660 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558433 4660 flags.go:64] FLAG: --storage-driver-password="root" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558439 4660 flags.go:64] FLAG: --storage-driver-secure="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558444 4660 flags.go:64] FLAG: --storage-driver-table="stats" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558449 4660 flags.go:64] FLAG: --storage-driver-user="root" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558454 4660 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558459 4660 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558465 4660 flags.go:64] FLAG: --system-cgroups="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558470 4660 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558479 4660 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558483 4660 flags.go:64] FLAG: --tls-cert-file="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558488 4660 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558497 4660 flags.go:64] FLAG: --tls-min-version="" Nov 29 07:15:09 
crc kubenswrapper[4660]: I1129 07:15:09.558503 4660 flags.go:64] FLAG: --tls-private-key-file="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558508 4660 flags.go:64] FLAG: --topology-manager-policy="none" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558512 4660 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558517 4660 flags.go:64] FLAG: --topology-manager-scope="container" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558533 4660 flags.go:64] FLAG: --v="2" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558549 4660 flags.go:64] FLAG: --version="false" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558556 4660 flags.go:64] FLAG: --vmodule="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558562 4660 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.558567 4660 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558730 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558740 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558745 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558750 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558755 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558760 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558764 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558773 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558779 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558783 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558788 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558792 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558797 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558802 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558807 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558812 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558816 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558822 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 
07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558826 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558831 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558836 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558840 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558846 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558850 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558854 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558858 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558863 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558867 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558871 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558876 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558880 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558886 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558890 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558895 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558899 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558903 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558909 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
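Between the gate-warning dumps, the flags.go:64 block above is the kubelet echoing its entire effective command line, one `FLAG: --name="value"` entry each; it is the quickest way to diff the configuration of two boots of the same node. A sketch that lifts the block into a dict, reusing the hypothetical `kubelet.log` capture from earlier:

```python
# Sketch: extract the effective command-line flags the kubelet logged via
# flags.go:64 into a name -> value mapping.
import re

FLAG = re.compile(r'FLAG: --([a-z0-9-]+)="(.*?)"')

with open("kubelet.log") as fh:  # hypothetical capture of this journal
    flags = dict(FLAG.findall(fh.read()))

print(flags["container-runtime-endpoint"])  # /var/run/crio/crio.sock
print(flags["node-ip"])                     # 192.168.126.11
print(flags["system-reserved"])             # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
```

Given two such dicts `a` and `b` from different boots, `set(a.items()) ^ set(b.items())` shows exactly what changed between them.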
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558914 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558919 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558926 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558931 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558935 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558940 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558945 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558949 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558953 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558958 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558962 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558966 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558970 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558975 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558979 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558983 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558988 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558992 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.558996 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559000 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559005 4660 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559009 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559015 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559022 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559027 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559031 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559035 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559042 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559047 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559053 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559059 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559064 4660 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559069 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.559074 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.559090 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.565783 4660 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.565812 4660 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565883 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565890 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565896 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565901 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565906 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565910 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565915 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565919 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:15:09 crc 
kubenswrapper[4660]: W1129 07:15:09.565924 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565928 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565932 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565937 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565941 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565945 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565951 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565959 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565964 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565969 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565973 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565978 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565982 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565988 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565993 4660 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.565999 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566009 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566014 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566018 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566022 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566026 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566031 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566035 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566040 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566044 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566049 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566054 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566060 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566065 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566070 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566075 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566079 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566084 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566090 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566095 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566100 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566106 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
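Distinct from the unrecognized-gate noise, the feature_gate.go:351 and :353 messages (KMSv1, DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs, ValidatingAdmissionPolicy) mean a gate is being set explicitly even though it is already deprecated or GA; such overrides stop working once the gate is removed upstream, so they are worth tracking for config cleanup. A sketch over the same hypothetical capture:

```python
# Sketch: surface feature gates that are explicitly pinned although already
# GA or deprecated (the feature_gate.go:351 / :353 warnings above).
import re

SETTING = re.compile(r"Setting (GA|deprecated) feature gate (\S+?)=(\S+?)\.")

with open("kubelet.log") as fh:  # hypothetical capture of this journal
    hits = sorted(set(SETTING.findall(fh.read())))

for status, gate, value in hits:
    print(f"{gate}={value} ({status}) -> candidate to drop from the gate map")
```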
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566111 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566116 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566121 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566126 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566130 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566134 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566139 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566143 4660 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566148 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566152 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566156 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566160 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566165 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566169 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566173 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566211 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566216 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566221 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566227 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566232 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566237 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566241 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566245 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566249 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566254 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566259 4660 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.566268 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566399 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566414 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566419 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566424 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566429 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566434 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566439 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566443 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566447 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566454 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566461 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566465 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566469 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566473 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566479 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
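The feature_gate.go:386 info line just above is the authoritative outcome of all this parsing: the resolved gate map, printed in Go's `map[...]` syntax (it appears three times in this boot, identical each time). A sketch that converts that rendering into a Python dict, with a shortened copy of the line inlined so the snippet is self-contained:

```python
# Sketch: parse the Go map rendering from a feature_gate.go:386 line,
# e.g. "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true ...]}"
import re

line = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
body = re.search(r"\{map\[(.*)\]\}", line).group(1)
gates = {
    name: value == "true"
    for name, value in (pair.split(":") for pair in body.split())
}
print(gates)  # {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```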
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566484 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566489 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566494 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566498 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566503 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566507 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566511 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566516 4660 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566520 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566525 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566529 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566534 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566538 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566543 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566547 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566551 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566556 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566561 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566567 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566575 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566580 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566585 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566589 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566593 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566598 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566604 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566627 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566632 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566637 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566641 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566646 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566651 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566656 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566661 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566665 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566670 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566674 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566679 4660 feature_gate.go:330] unrecognized feature gate: Example Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566683 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566687 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566692 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566697 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566702 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566706 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics 
Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566710 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566715 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566719 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566724 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566728 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566732 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566738 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566742 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566746 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566751 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566755 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.566761 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.566769 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.567141 4660 server.go:940] "Client rotation is on, will bootstrap in background" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.569729 4660 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.569810 4660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
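"Client rotation is on" plus the cert/key load above feed the rotation schedule logged just below: the certificate manager (certificate_manager.go in the log prefix) picks a jittered deadline at roughly 70-90% of the certificate's validity window, which is consistent with the 2025-12-06 deadline below if this is a one-year client certificate (that date sits near 78% of such a window). A sketch that reads the same PEM and reproduces the arithmetic; it assumes the third-party `cryptography` package, read access to /var/lib/kubelet/pki, and that the 70-90% window is an approximation of upstream behavior rather than its exact code.

```python
# Sketch: read the kubelet client certificate and estimate the rotation
# window the certificate manager logs (jittered deadline at roughly
# 70-90% of the cert's validity; approximation, not the upstream code).
from cryptography import x509  # 'cryptography' package, assumed installed

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

with open(PEM, "rb") as fh:
    data = fh.read()

# The file holds a cert and key concatenated; isolate the CERTIFICATE block.
start = data.index(b"-----BEGIN CERTIFICATE-----")
end = data.index(b"-----END CERTIFICATE-----") + len(b"-----END CERTIFICATE-----")
cert = x509.load_pem_x509_certificate(data[start:end])

lifetime = cert.not_valid_after - cert.not_valid_before
print("expires:", cert.not_valid_after)
print("rotation window:", cert.not_valid_before + 0.7 * lifetime,
      "to", cert.not_valid_before + 0.9 * lifetime)
```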
Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.570367 4660 server.go:997] "Starting client certificate rotation" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.570395 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.570521 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 09:18:43.909180799 +0000 UTC Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.570575 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 170h3m34.338607945s for next certificate rotation Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.577750 4660 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.579775 4660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.587047 4660 log.go:25] "Validated CRI v1 runtime API" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.603556 4660 log.go:25] "Validated CRI v1 image API" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.605038 4660 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.607467 4660 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-29-07-10-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.607490 4660 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.616466 4660 manager.go:217] Machine: {Timestamp:2025-11-29 07:15:09.615670209 +0000 UTC m=+0.169200118 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e8ec79b4-9420-428e-820e-3d546f24f945 BootID:168d3329-d7ae-441d-bd3b-eaf0cacb1014 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:30:57:b7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:30:57:b7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4c:ed:16 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a8:01:12 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:61:60:a7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c8:96:ac Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:ad:90:c5:c9:92 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:3c:17:80:8c:6d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.616630 4660 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
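[Annotation] The certificate_manager lines above pick a rotation deadline well before the certificate's 2026-02-24 expiry and then sleep until it; the logged wait of 170h3m34s is simply the rotation deadline minus the current time (2025-12-06 09:18:43 minus 2025-11-29 07:15:09). A minimal Go sketch of that arithmetic, with the timestamps copied from the log (seconds truncated to whole values):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the kube-apiserver-client-kubelet log lines above.
	now, _ := time.Parse(time.RFC3339, "2025-11-29T07:15:09Z")
	deadline, _ := time.Parse(time.RFC3339, "2025-12-06T09:18:43Z")
	// The manager sleeps until the rotation deadline, not until expiry.
	fmt.Println(deadline.Sub(now)) // 170h3m34s, matching the logged wait
}
```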
Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.616733 4660 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617121 4660 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617348 4660 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617385 4660 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617559 4660 topology_manager.go:138] "Creating topology manager with none policy" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617567 4660 container_manager_linux.go:303] "Creating device plugin manager" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617781 4660 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.617815 4660 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618054 4660 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618127 4660 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618673 4660 kubelet.go:418] "Attempting to sync node with API server" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618689 4660 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
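[Annotation] The NodeConfig dump above carries the hard-eviction table: each signal has either an absolute quantity (memory.available < 100Mi) or a percentage of capacity (nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%). A hedged Go sketch of how such a threshold check could be evaluated — illustrative only; the kubelet's eviction manager also handles the GracePeriod and MinReclaim fields shown above:

```go
package main

import "fmt"

// evictHard reports whether a signal crosses its hard-eviction threshold,
// mirroring the quantity/percentage split in the NodeConfig above.
// Exactly one of pct or quantity is expected to be set.
func evictHard(available, capacity int64, pct float64, quantity int64) bool {
	if quantity > 0 {
		return available < quantity
	}
	return float64(available) < pct*float64(capacity)
}

func main() {
	// memory.available against the 100Mi absolute threshold from the log
	// (25199480832 is this node's MemoryCapacity; 90Mi free is hypothetical).
	fmt.Println(evictHard(90<<20, 25199480832, 0, 100<<20)) // true -> evict
	// nodefs.available against the 10% threshold, using /dev/vda4's capacity
	// from the log (10GiB free is hypothetical).
	fmt.Println(evictHard(10<<30, 85292941312, 0.10, 0)) // false -> healthy
}
```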
Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618709 4660 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618720 4660 kubelet.go:324] "Adding apiserver pod source" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.618730 4660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.620224 4660 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.621144 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.621227 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.621282 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.621377 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.621563 4660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
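[Annotation] Both reflector failures above are the same symptom: nothing is listening yet behind api-int.crc.testing:6443, so every list/watch dial ends in connection refused and client-go keeps retrying until the apiserver comes up — expected this early in node boot, before the control-plane pods are running again. A small Go sketch of that dial-and-retry shape, assuming an unreachable endpoint stands in for the not-yet-started apiserver (not client-go's actual reflector code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint from the log; any closed port reproduces "connection refused".
	addr := "api-int.crc.testing:6443"
	backoff := time.Second
	for attempt := 0; attempt < 3; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("retrying after", backoff, "-", err)
			time.Sleep(backoff)
			backoff *= 2 // retries back off in a similar spirit to client-go
			continue
		}
		conn.Close() // apiserver is reachable; a real client would list/watch now
		break
	}
}
```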
Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623013 4660 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623484 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623508 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623517 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623525 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623537 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623546 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623556 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623575 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623589 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623602 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623648 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623660 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.623687 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.624029 4660 server.go:1280] "Started kubelet" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.624316 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.624322 4660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.624326 4660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.625419 4660 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 29 07:15:09 crc systemd[1]: Started Kubernetes Kubelet. 
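[Annotation] The run of plugins.go:603 lines above is the kubelet probing its built-in (in-tree) volume plugins plus CSI into a registry keyed by plugin name before the server starts. A toy Go sketch of such a name-keyed registry with duplicate detection — the types here are invented for illustration, not the kubelet's real plugin interfaces:

```go
package main

import "fmt"

// volumePlugin is a stand-in for the kubelet's plugin interface.
type volumePlugin interface{ Name() string }

type hostPath struct{}

func (hostPath) Name() string { return "kubernetes.io/host-path" }

func main() {
	registry := map[string]volumePlugin{}
	for _, p := range []volumePlugin{hostPath{}} {
		if _, dup := registry[p.Name()]; dup {
			panic("volume plugin registered twice: " + p.Name())
		}
		registry[p.Name()] = p
		fmt.Println("Loaded volume plugin", p.Name()) // cf. plugins.go:603
	}
}
```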
Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.627577 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.627648 4660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.628033 4660 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.628047 4660 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.628157 4660 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.635681 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:19:11.257556073 +0000 UTC Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.635856 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.635923 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.635589 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c68ea51a8348c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:15:09.624005772 +0000 UTC m=+0.177535671,LastTimestamp:2025-11-29 07:15:09.624005772 +0000 UTC m=+0.177535671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.635741 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 74h4m1.621818561s for next certificate rotation Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.636060 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="200ms" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.629314 4660 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.636740 4660 server.go:460] "Adding debug handlers to kubelet server" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.639189 4660 factory.go:55] Registering systemd factory Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.639212 4660 factory.go:221] Registration of the 
systemd container factory successfully Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642223 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642274 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642286 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642299 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642309 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642321 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642334 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642344 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642357 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642368 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642380 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 29 07:15:09 
crc kubenswrapper[4660]: I1129 07:15:09.642391 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642403 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642417 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642444 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642455 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642466 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642477 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642489 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642500 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642517 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642528 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: 
I1129 07:15:09.642538 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642550 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642562 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642573 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642585 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642597 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642629 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642642 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642654 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642665 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642676 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642687 4660 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642699 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642712 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642724 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642736 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642747 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642759 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642770 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642782 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.642793 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643339 4660 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 29 07:15:09 crc 
kubenswrapper[4660]: I1129 07:15:09.643361 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643375 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643390 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643401 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643411 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643424 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643435 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643446 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643457 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643472 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643485 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 
07:15:09.643496 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643510 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643523 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643534 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643544 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643554 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643565 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643577 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643587 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643598 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643629 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643642 4660 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643655 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643667 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643679 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643691 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643702 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643713 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643725 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643737 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643748 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643760 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643772 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643785 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643795 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643806 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643818 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643829 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643840 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643856 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643868 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643879 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643890 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643901 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643912 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643923 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643933 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643943 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643954 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643967 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643977 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643988 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.643999 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644008 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644019 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644030 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644041 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644053 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644064 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644076 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644092 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644104 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644115 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644128 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644140 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644155 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644169 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644182 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644193 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644206 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644218 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644231 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644242 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644255 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644268 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644281 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644291 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644302 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644314 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644324 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644334 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644349 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644360 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644372 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644382 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644392 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644402 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644412 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644426 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644437 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644447 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644458 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644469 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644500 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644511 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644522 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644532 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644544 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644555 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644566 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644576 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644588 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644598 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644630 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644644 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644657 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644667 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644680 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644691 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644705 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644717 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644728 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644741 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644754 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644766 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644778 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644790 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644802 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644814 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644825 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644837 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644849 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644861 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644872 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644886 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644899 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644911 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644923 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644933 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644945 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644957 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644968 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644979 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.644991 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645002 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645013 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645025 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645037 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645066 4660 factory.go:153] Registering CRI-O factory Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645554 4660 factory.go:221] Registration of the crio container factory successfully Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645048 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645642 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645665 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645677 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" 
seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645689 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645700 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645710 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645719 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645729 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645739 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645749 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645758 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645769 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645778 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645787 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 29 07:15:09 crc 
kubenswrapper[4660]: I1129 07:15:09.645799 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645809 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645818 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645828 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645838 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645848 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645860 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645871 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645881 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645891 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645900 4660 reconstruct.go:97] "Volume reconstruction finished" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.645908 4660 reconciler.go:26] "Reconciler: start to sync state" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.646591 4660 factory.go:219] 
Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.646650 4660 factory.go:103] Registering Raw factory Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.646665 4660 manager.go:1196] Started watching for new ooms in manager Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.647215 4660 manager.go:319] Starting recovery of all containers Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.669099 4660 manager.go:324] Recovery completed Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.677466 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.681811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.682035 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.682259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.684424 4660 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.684451 4660 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.684473 4660 state_mem.go:36] "Initialized new in-memory state store" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.690760 4660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.692228 4660 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.692264 4660 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.692296 4660 kubelet.go:2335] "Starting kubelet main sync loop" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.692338 4660 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 29 07:15:09 crc kubenswrapper[4660]: W1129 07:15:09.692857 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.692909 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.719585 4660 policy_none.go:49] "None policy: Start" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.721164 4660 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.721285 4660 state_mem.go:35] "Initializing new in-memory state store" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.729261 4660 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.792773 4660 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.795850 4660 manager.go:334] "Starting Device Plugin manager" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.795934 4660 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.795960 4660 server.go:79] "Starting device plugin registration server" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.796405 4660 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.796430 4660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.796536 4660 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.796663 4660 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.796686 4660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.802799 4660 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.837090 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="400ms" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.897002 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.898589 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.898635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.898645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.898665 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:15:09 crc kubenswrapper[4660]: E1129 07:15:09.899228 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.165:6443: connect: connection refused" node="crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.993573 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.993723 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.994786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.994812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.994824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.994957 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995151 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995191 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995697 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995722 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995819 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995928 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.995957 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996506 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.996896 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997162 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997206 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.997945 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998084 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998120 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998280 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998909 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998927 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.998936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.999051 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.999070 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.999689 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.999711 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:09 crc kubenswrapper[4660]: I1129 07:15:09.999720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.000053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.000082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.000094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054554 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054582 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054598 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054629 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054649 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054679 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054857 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054908 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054957 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.054992 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.055016 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.055047 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.055075 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.055094 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.055114 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc 
kubenswrapper[4660]: I1129 07:15:10.100143 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.101410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.101504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.101581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.101685 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:15:10 crc kubenswrapper[4660]: E1129 07:15:10.102204 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.165:6443: connect: connection refused" node="crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156177 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156358 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156404 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156553 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156580 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156603 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156645 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156669 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156688 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156690 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156748 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156484 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156707 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156785 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156821 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156822 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156846 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" 
(UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156877 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156887 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156903 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156922 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156925 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156941 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156953 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156967 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156977 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.156990 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.157009 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.157029 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.157121 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: E1129 07:15:10.238566 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="800ms" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.327290 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.349250 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b8d43a8ee0512c62b6832a75e49b7c190a59cf792aa3ac2c4587f2dc923925c3 WatchSource:0}: Error finding container b8d43a8ee0512c62b6832a75e49b7c190a59cf792aa3ac2c4587f2dc923925c3: Status 404 returned error can't find the container with id b8d43a8ee0512c62b6832a75e49b7c190a59cf792aa3ac2c4587f2dc923925c3 Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.350958 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.357640 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.369968 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6981bc234e5560da355cef592a82b58a04d21924818882309ca49071abeb57ea WatchSource:0}: Error finding container 6981bc234e5560da355cef592a82b58a04d21924818882309ca49071abeb57ea: Status 404 returned error can't find the container with id 6981bc234e5560da355cef592a82b58a04d21924818882309ca49071abeb57ea Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.370437 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8cdbfcb408a19e1698fc9cbc03244db7808c5dc91111679b1164446dd9eb3c4c WatchSource:0}: Error finding container 8cdbfcb408a19e1698fc9cbc03244db7808c5dc91111679b1164446dd9eb3c4c: Status 404 returned error can't find the container with id 8cdbfcb408a19e1698fc9cbc03244db7808c5dc91111679b1164446dd9eb3c4c Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.372690 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.377131 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.388030 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0e6f375f436919ef95d74223543800a62e04c25e8dfc9bba5831bc9345e9bd44 WatchSource:0}: Error finding container 0e6f375f436919ef95d74223543800a62e04c25e8dfc9bba5831bc9345e9bd44: Status 404 returned error can't find the container with id 0e6f375f436919ef95d74223543800a62e04c25e8dfc9bba5831bc9345e9bd44 Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.396545 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-70450525a769a8e8575a0d5d3d7c2effe89b24a24ccdc7e5faae0dc56afd138d WatchSource:0}: Error finding container 70450525a769a8e8575a0d5d3d7c2effe89b24a24ccdc7e5faae0dc56afd138d: Status 404 returned error can't find the container with id 70450525a769a8e8575a0d5d3d7c2effe89b24a24ccdc7e5faae0dc56afd138d Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.479447 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:10 crc kubenswrapper[4660]: E1129 07:15:10.479540 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.502985 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
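The reflector warnings above (failed to list *v1.CSIDriver, and later *v1.Node, *v1.Service, *v1.RuntimeClass) all come from the kubelet's shared informers, whose initial LIST cannot complete while api-int.crc.testing:6443 refuses connections. A minimal sketch of how such an informer is wired up with client-go; the kubeconfig path and resync period here are assumptions, not values taken from this log:

```go
// Minimal sketch of the client-go informer machinery behind the reflector
// lines above. The factory's reflector issues exactly the LIST/WATCH
// requests that are being refused while the apiserver is down.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute) // assumed resync
	// Touching the CSIDriver informer registers it, which makes the factory
	// LIST /apis/storage.k8s.io/v1/csidrivers and then WATCH it; that is the
	// request logged as refused at 07:15:10.479447 above.
	csiInformer := factory.Storage().V1().CSIDrivers().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // blocks until the initial LIST succeeds
	fmt.Println("CSIDriver cache synced:", csiInformer.HasSynced())
}
```

The reflector retries the failed LIST internally, which is why the same warning reappears at 07:15:13 and again later in this journal.
Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.504204 4660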
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.504244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.504253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.504278 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:15:10 crc kubenswrapper[4660]: E1129 07:15:10.504682 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.165:6443: connect: connection refused" node="crc" Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.627932 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.697396 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8cdbfcb408a19e1698fc9cbc03244db7808c5dc91111679b1164446dd9eb3c4c"} Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.698991 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6981bc234e5560da355cef592a82b58a04d21924818882309ca49071abeb57ea"} Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.700109 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8d43a8ee0512c62b6832a75e49b7c190a59cf792aa3ac2c4587f2dc923925c3"} Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.701046 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"70450525a769a8e8575a0d5d3d7c2effe89b24a24ccdc7e5faae0dc56afd138d"} Nov 29 07:15:10 crc kubenswrapper[4660]: I1129 07:15:10.702028 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e6f375f436919ef95d74223543800a62e04c25e8dfc9bba5831bc9345e9bd44"} Nov 29 07:15:10 crc kubenswrapper[4660]: W1129 07:15:10.817724 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:10 crc kubenswrapper[4660]: E1129 07:15:10.818077 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:11 crc kubenswrapper[4660]: W1129 07:15:11.019281 4660 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:11 crc kubenswrapper[4660]: E1129 07:15:11.019364 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:11 crc kubenswrapper[4660]: E1129 07:15:11.039670 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="1.6s" Nov 29 07:15:11 crc kubenswrapper[4660]: W1129 07:15:11.210282 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:11 crc kubenswrapper[4660]: E1129 07:15:11.210356 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.305421 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.311212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.311246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.311256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.311278 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:15:11 crc kubenswrapper[4660]: E1129 07:15:11.311768 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.165:6443: connect: connection refused" node="crc" Nov 29 07:15:11 crc kubenswrapper[4660]: I1129 07:15:11.628139 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:12 crc kubenswrapper[4660]: E1129 07:15:12.064102 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c68ea51a8348c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:15:09.624005772 +0000 UTC m=+0.177535671,LastTimestamp:2025-11-29 07:15:09.624005772 +0000 UTC m=+0.177535671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.628007 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:12 crc kubenswrapper[4660]: E1129 07:15:12.640585 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="3.2s" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.713577 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.713639 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.713652 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.714636 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843" exitCode=0 Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.714696 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.714811 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.716092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.716120 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.716129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
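The lease errors in this journal show the retry interval doubling: interval="800ms" at 07:15:10, "1.6s" at 07:15:11, "3.2s" just above, and "6.4s" at 07:15:29. A minimal sketch of that doubling-with-cap pattern; ensureLease and the 7s cap are assumptions for illustration, not the kubelet's actual implementation:

```go
// Doubling retry sketch matching the intervals visible in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLease() error {
	// Stand-in for the real call to the coordination.k8s.io API that is
	// failing with "connect: connection refused" in the log.
	return errors.New("dial tcp 38.129.56.165:6443: connect: connection refused")
}

func main() {
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap
	for attempt := 0; attempt < 5; attempt++ {
		if err := ensureLease(); err == nil {
			return
		}
		fmt.Printf("failed to ensure lease, will retry interval=%v\n", interval)
		time.Sleep(interval)
		if 2*interval <= maxInterval {
			interval *= 2 // 800ms, 1.6s, 3.2s, 6.4s, as seen above
		}
	}
}
```
Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.717252 4660 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9"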
containerID="28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b" exitCode=0 Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.717320 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.717401 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.717408 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.718294 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.720467 4660 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146" exitCode=0 Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.720515 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.720571 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.721344 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.721388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.721399 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.722270 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac" exitCode=0 Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.722292 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac"} Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.722396 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.723004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.723067 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.723078 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.911910 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.913092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.913130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.913139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:12 crc kubenswrapper[4660]: I1129 07:15:12.913162 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 29 07:15:12 crc kubenswrapper[4660]: E1129 07:15:12.913548 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.165:6443: connect: connection refused" node="crc" Nov 29 07:15:13 crc kubenswrapper[4660]: W1129 07:15:13.113806 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:13 crc kubenswrapper[4660]: E1129 07:15:13.113893 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:13 crc kubenswrapper[4660]: W1129 07:15:13.147429 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:13 crc kubenswrapper[4660]: E1129 07:15:13.147507 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:13 crc kubenswrapper[4660]: W1129 07:15:13.426021 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.165:6443: connect: connection refused Nov 29 07:15:13 crc kubenswrapper[4660]: E1129 07:15:13.426104 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.165:6443: connect: connection refused" logger="UnhandledError" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.731631 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.731679 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.734893 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3" exitCode=0 Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.734966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.734988 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.735978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.736008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.736020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.738138 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.738201 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.738773 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.738803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.738815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.740418 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.740450 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.742084 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed"} Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.742155 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.742983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.743011 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:13 crc kubenswrapper[4660]: I1129 07:15:13.743022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.748226 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949"} Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.748321 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.749958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.749986 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.749995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.754806 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8" exitCode=0 Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.754862 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8"} Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.754984 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.756320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.756376 4660 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.756398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761339 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828"} Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761396 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb"} Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761424 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761436 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5"} Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.761585 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.762790 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.762829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.762845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.763680 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.763705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.763720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.765953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.766190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:14 crc kubenswrapper[4660]: I1129 07:15:14.766390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766002 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766049 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:15 crc 
kubenswrapper[4660]: I1129 07:15:15.766424 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a47d3f116580df6a2a6b9322cb2a081b2b1a4feb63454e859b0e3f5145f8b7ac"} Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766465 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"36e1b0d3c72a569c203641619285fe61ba7274e3fa33c4fc6662fc99c35cf551"} Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766480 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1a524d037d1390427673fa9698643411c3902595e04e84a84603afc5bbf79d15"} Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766490 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e82e441855edd7e07e285e91535af7db0b9995acf6e286ee4ba991fbde7af4bc"} Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766579 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.766846 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767122 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:15 crc kubenswrapper[4660]: I1129 07:15:15.767738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.078586 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.113935 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.115667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.115714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.115727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.115754 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" 
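Interleaved with the PLEG events, the kubelet keeps retrying node registration: each "Attempting to register node" above is a POST to /api/v1/nodes that fails with connection refused until 07:15:29, when the log reports "Node was previously registered" and "Successfully registered node". A rough sketch of that loop; the kubeconfig path and retry period are assumptions, and the real kubelet additionally reconciles an existing Node object rather than just tolerating AlreadyExists:

```go
// Register-node retry loop sketch, assuming standard client-go setup.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "crc"}}

	for {
		_, err := client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
		if err == nil || apierrors.IsAlreadyExists(err) {
			// AlreadyExists corresponds to "Node was previously registered"
			// later in this journal.
			fmt.Println("successfully registered node")
			return
		}
		fmt.Printf("unable to register node with API server: %v\n", err)
		time.Sleep(5 * time.Second) // assumed retry period
	}
}
```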
Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.770935 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7fdbb82f4863a742b1c19fe5f3ac11f0712f113716e0e70dc29abc0aef258417"} Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.771014 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.771069 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.771118 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772154 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772284 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772937 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:16 crc kubenswrapper[4660]: I1129 07:15:16.772972 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.422896 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.423114 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.424489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.424534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.424551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.773105 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.774142 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.774174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.774185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.850559 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.850809 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.851970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.852009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.852019 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.858549 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:17 crc kubenswrapper[4660]: I1129 07:15:17.985805 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.295675 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.295844 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.296876 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.296926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.296936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.774711 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.775726 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.775770 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:18 crc kubenswrapper[4660]: I1129 07:15:18.775783 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.710263 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.710557 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.712216 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.712252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.712261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.776971 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.777866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.777911 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:19 crc kubenswrapper[4660]: I1129 07:15:19.777922 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:19 crc kubenswrapper[4660]: E1129 07:15:19.802924 4660 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 29 07:15:20 crc kubenswrapper[4660]: I1129 07:15:20.986885 4660 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:15:20 crc kubenswrapper[4660]: I1129 07:15:20.987294 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:15:21 crc kubenswrapper[4660]: I1129 07:15:21.430751 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 29 07:15:21 crc kubenswrapper[4660]: I1129 07:15:21.430973 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:21 crc kubenswrapper[4660]: I1129 07:15:21.432582 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:21 crc kubenswrapper[4660]: I1129 07:15:21.432682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:21 crc kubenswrapper[4660]: I1129 07:15:21.432706 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.239172 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.240355 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.242998 4660 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.244378 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.244461 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.248749 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.629305 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:15:23 crc kubenswrapper[4660]: W1129 07:15:23.932391 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 29 07:15:23 crc kubenswrapper[4660]: I1129 07:15:23.932494 4660 trace.go:236] Trace[553468996]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:15:13.927) (total time: 10004ms): Nov 29 07:15:23 crc kubenswrapper[4660]: Trace[553468996]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (07:15:23.932) Nov 29 07:15:23 crc kubenswrapper[4660]: Trace[553468996]: [10.004963292s] [10.004963292s] END Nov 29 07:15:23 crc kubenswrapper[4660]: E1129 07:15:23.932517 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.075550 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.075635 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.237727 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
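The 403 above is the apiserver rejecting the kubelet's unauthenticated startup probe: the probe GETs /livez with no client credentials, so the apiserver sees system:anonymous, and until anonymous access to the health endpoints is authorized during bring-up the request is forbidden even though the process is running. A sketch of what an HTTPS startup probe of this shape looks like in the core/v1 API; the path /livez is taken from the error message itself, while the port and thresholds are assumptions rather than values from the real static-pod manifest:

```go
// Shape of an HTTPS startup probe like the one failing above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	startup := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez",                // from the 403 message in the log
				Port:   intstr.FromInt(6443),    // assumed port
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    5,  // assumed
		TimeoutSeconds:   10, // assumed
		FailureThreshold: 30, // assumed; generous to ride out slow bring-up
	}
	fmt.Printf("%+v\n", startup)
}
```
Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.237808 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"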
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.246441 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.246498 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.247389 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.248351 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.248471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:24 crc kubenswrapper[4660]: I1129 07:15:24.248558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.124427 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.125567 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.126984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.127009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.127017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.148804 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.252077 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.253249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.253280 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.253290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:26 crc kubenswrapper[4660]: I1129 07:15:26.271158 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 29 07:15:27 crc kubenswrapper[4660]: I1129 07:15:27.253974 4660 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 29 07:15:27 crc kubenswrapper[4660]: I1129 07:15:27.254940 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:27 crc kubenswrapper[4660]: I1129 07:15:27.255052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:27 crc kubenswrapper[4660]: I1129 07:15:27.255204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.301971 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.302134 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.302491 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.302538 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.303398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.303442 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.303455 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:28 crc kubenswrapper[4660]: I1129 07:15:28.308512 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.240134 4660 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.240965 4660 trace.go:236] Trace[2132759154]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:15:18.715) (total time: 10525ms): Nov 29 07:15:29 crc kubenswrapper[4660]: Trace[2132759154]: ---"Objects listed" error: 10525ms (07:15:29.240) Nov 29 07:15:29 crc kubenswrapper[4660]: Trace[2132759154]: [10.525801452s] [10.525801452s] END Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.241002 4660 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.242756 4660 trace.go:236] Trace[953597065]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Nov-2025 07:15:18.456) (total time: 10785ms): Nov 29 07:15:29 crc kubenswrapper[4660]: Trace[953597065]: ---"Objects listed" error: 10785ms (07:15:29.242) Nov 29 07:15:29 crc kubenswrapper[4660]: Trace[953597065]: [10.785924531s] [10.785924531s] END
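The Trace[...] blocks above are emitted by k8s.io/utils/trace: the reflector wraps its ListAndWatch in a trace and logs it only when it exceeds a threshold, which is why only the ~10s LISTs (slowed first by connection refusals, then TLS handshake timeouts) show up here. A minimal sketch of producing such a trace; the 10s threshold is an assumption:

```go
// Producing a latency trace like the Trace[...] lines above.
package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

func main() {
	t := utiltrace.New("Reflector ListAndWatch",
		utiltrace.Field{Key: "name", Value: "k8s.io/client-go/informers/factory.go:160"})
	defer t.LogIfLong(10 * time.Second) // logged only if the whole trace exceeds 10s

	time.Sleep(50 * time.Millisecond) // stand-in for the initial LIST call
	t.Step("Objects listed")          // becomes the ---"Objects listed" step line
}
```
Nov 29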
07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.242779 4660 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.243626 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.250788 4660 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.258247 4660 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.258477 4660 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.259221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.259262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.259274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.259291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.259302 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.289839 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.293381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.293419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 
07:15:29.293427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.293445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.293455 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.296447 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.306495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.306520 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.306528 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.306567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.306575 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.310642 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.317569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.317595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.317605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.317637 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.317645 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.335917 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.335943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.335951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.335967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.335976 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.350232 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.350385 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.352105 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 
07:15:29.352134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.352143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.352160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.352169 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.453080 4660 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.454580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.454646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.454662 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.454684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.454697 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.557086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.557136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.557149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.557170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.557181 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.630625 4660 apiserver.go:52] "Watching apiserver" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.638984 4660 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.639326 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sqtc9","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.639683 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.639717 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.639732 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.639814 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.640021 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.640134 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.640247 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.640259 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.640301 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.640630 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.644245 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.645827 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.646446 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.646786 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.650595 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.650939 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.655703 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.655999 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.656635 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.656796 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.657057 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.659070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.659115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.659127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.659146 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.659158 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.661948 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.708645 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.716571 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.723414 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.734986 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.736668 4660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.746079 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752181 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752262 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752295 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752349 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752750 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752794 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752822 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752852 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.752995 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753092 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753003 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753023 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753174 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753171 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753325 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753192 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753462 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753489 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753514 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753562 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753591 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753604 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753663 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753694 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753725 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.753725 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.754507 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:15:30.254488957 +0000 UTC m=+20.808018856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754082 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754549 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754568 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754586 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754602 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754635 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754651 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754668 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754684 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754701 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754717 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754733 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754749 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754764 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754778 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754793 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754809 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754823 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754837 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754852 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754870 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754889 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754921 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754941 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754956 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754970 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.754993 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755008 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755026 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755041 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755056 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755071 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755539 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755636 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755854 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.755919 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756016 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756091 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756155 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756274 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756338 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756400 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756483 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756550 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756628 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756703 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756768 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756830 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756890 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.756949 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757017 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757078 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757139 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757200 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757261 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757325 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757388 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757449 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757510 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757573 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757740 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757809 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757924 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757989 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758049 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758116 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758176 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758236 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758298 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758359 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758424 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760546 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760678 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760743 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760807 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760880 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.761009 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.761082 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763682 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763866 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764062 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765387 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765426 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765450 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765468 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765485 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765502 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765526 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765546 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765568 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765594 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765631 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765649 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765668 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765687 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765708 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765734 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765754 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765770 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765787 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765804 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765820 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765837 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765887 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765912 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757700 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765938 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.757927 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758049 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758060 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758092 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.758765 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.759012 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.759226 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.759484 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.759882 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.759977 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760195 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760351 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760582 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760551 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760698 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.760787 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.761160 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.761483 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.761549 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762732 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762774 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762886 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762950 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762967 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.762860 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763308 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763322 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763342 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763435 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763449 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763493 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763520 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763871 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764033 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764213 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764354 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764570 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.764968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765017 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765094 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765377 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765693 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765710 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765892 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.766299 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.766708 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.766747 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.766847 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.767325 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.767478 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.767517 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.767562 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.767776 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768075 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768086 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768117 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768487 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768499 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768545 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768669 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.768687 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.769017 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.769034 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.769283 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.769430 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.769765 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770072 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770108 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770335 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.765963 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770403 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770428 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770447 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770465 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770481 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770496 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770511 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770527 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770545 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770583 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770602 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770633 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770699 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770730 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770757 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770813 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770836 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770861 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770884 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770904 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770929 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770951 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770971 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.770992 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771014 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771034 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771054 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771074 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771094 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771115 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771139 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771160 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771184 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771207 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771230 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771254 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771275 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771292 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771308 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771323 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771339 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771355 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771375 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771396 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771416 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771435 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771455 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771475 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771498 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771520 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771525 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.771543 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772156 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772546 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772630 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772686 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.772855 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773030 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773070 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773099 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773292 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773320 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773337 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773679 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 29 07:15:29 crc 
kubenswrapper[4660]: I1129 07:15:29.773719 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773745 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773789 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773814 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773830 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773851 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773885 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773916 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773944 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773968 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 29 07:15:29 crc 
kubenswrapper[4660]: I1129 07:15:29.773992 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774018 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774070 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-hosts-file\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774096 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774124 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774150 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774178 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsz8\" (UniqueName: \"kubernetes.io/projected/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-kube-api-access-qhsz8\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774206 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774231 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 
29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774257 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774279 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774307 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774327 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774362 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774391 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774417 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774441 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774467 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774536 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774551 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774565 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774579 4660 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774593 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774606 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774640 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774654 4660 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774670 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774683 4660 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774695 4660 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774708 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774720 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774732 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774745 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774757 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774770 4660 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774783 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774795 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774808 4660 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774819 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774831 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774846 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774859 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774872 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on 
node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774885 4660 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774897 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774943 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774959 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774973 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774985 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.774998 4660 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775010 4660 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775022 4660 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775034 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775048 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775061 4660 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775073 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc 
kubenswrapper[4660]: I1129 07:15:29.775086 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775098 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775111 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775126 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775140 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775154 4660 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775168 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775180 4660 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775192 4660 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775206 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775218 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775231 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775244 4660 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775262 4660 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775276 4660 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775287 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775300 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775329 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775342 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775354 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775366 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775381 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775394 4660 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775599 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775633 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775647 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775660 4660 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775673 4660 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775685 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775698 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775710 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775724 4660 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775737 4660 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775751 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775763 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775775 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775788 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775801 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775812 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775826 4660 reconciler_common.go:293] 
"Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775838 4660 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775849 4660 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775861 4660 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775874 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775888 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780919 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.763679 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773067 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773140 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773283 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.773531 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.775839 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.777544 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.777817 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.778363 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.778679 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.778682 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.778986 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.779036 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.779125 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.779188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.780660 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.781080 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.781274 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.781280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.782228 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.782826 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.782822 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.783036 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.783096 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.783562 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.783823 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.783993 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.784286 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.784311 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.784568 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.784743 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.784893 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.785148 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.785753 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.786286 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.788474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.788843 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789075 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789318 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789564 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789730 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789819 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.789868 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.790293 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.790570 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.791024 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.797276 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.797474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.797600 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.798889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.799232 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.799589 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.800085 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.801344 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.801574 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.803951 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804073 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804190 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804307 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804410 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804873 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.804944 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.808822 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.823977 4660 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.830756 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.831498 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.809303 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.814139 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.818982 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.819047 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.819181 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.819305 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.819884 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.821019 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.821252 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.821366 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.821382 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.821645 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.823116 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.823403 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.824314 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.825281 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.825442 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.825670 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.825919 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.826003 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.826317 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.826469 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.826515 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.832841 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:30.332825614 +0000 UTC m=+20.886355513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.826841 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.827209 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829138 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829285 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829309 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829583 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829863 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.829959 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.830278 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.830528 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.832945 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:30.332939017 +0000 UTC m=+20.886468916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.831357 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.831828 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.833875 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.834653 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.834769 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.837339 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.840576 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.840882 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.844577 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.845207 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.846403 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.846425 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.846438 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.846485 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:30.346470349 +0000 UTC m=+20.900000248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.852655 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.852692 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.852741 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.852758 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:29 crc kubenswrapper[4660]: E1129 07:15:29.852838 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:30.352817804 +0000 UTC m=+20.906347753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.858911 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.859168 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.859496 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.860807 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.861925 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.863420 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.866188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876145 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876176 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-hosts-file\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876192 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876217 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhsz8\" (UniqueName: \"kubernetes.io/projected/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-kube-api-access-qhsz8\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876266 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876276 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node 
\"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876287 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876297 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876305 4660 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876314 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876322 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876331 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876339 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876349 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876358 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876366 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876402 4660 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876411 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876419 4660 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" 
Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876428 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876435 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876443 4660 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876453 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876461 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876469 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876476 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876484 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876493 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876502 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876511 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876519 4660 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876551 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876560 
4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876568 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876576 4660 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876584 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876591 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876599 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876619 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876626 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876634 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876641 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876650 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876658 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876668 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876676 4660 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876683 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876691 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876698 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876706 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876713 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876721 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876728 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876736 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876743 4660 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876750 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876758 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876767 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876775 4660 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876783 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876791 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876800 4660 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876808 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876816 4660 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876824 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876832 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876886 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876895 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876903 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876911 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876924 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876933 4660 reconciler_common.go:293] 
"Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876941 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876948 4660 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876956 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876965 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876973 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876980 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876988 4660 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.876996 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877005 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877013 4660 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877021 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877029 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877036 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877044 4660 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877052 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877059 4660 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877067 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877075 4660 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877084 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877091 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877100 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877107 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877115 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877126 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877133 4660 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877140 4660 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877147 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877156 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877164 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877172 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877180 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877189 4660 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877196 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877204 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877212 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877220 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877231 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877241 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877250 
4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877260 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877270 4660 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877386 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-hosts-file\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877422 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.877471 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.889279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.889462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.889535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.889592 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.889660 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:29Z","lastTransitionTime":"2025-11-29T07:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.891307 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.897461 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.900550 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.910571 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhsz8\" (UniqueName: \"kubernetes.io/projected/df7fd3a7-a7ba-4231-92bc-accc35c6d70c-kube-api-access-qhsz8\") pod \"node-resolver-sqtc9\" (UID: \"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\") " pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.914226 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.921859 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.933945 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.947732 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.955711 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.962801 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.966929 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.968756 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.974771 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-sqtc9" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.977684 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.977757 4660 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.980384 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:29 crc kubenswrapper[4660]: W1129 07:15:29.983082 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b16fcdbc84e4640ecbbd2cbf4d1397f18a600198b261127a25dc99148f1a4cca WatchSource:0}: Error finding container b16fcdbc84e4640ecbbd2cbf4d1397f18a600198b261127a25dc99148f1a4cca: Status 404 returned error can't find the container with id b16fcdbc84e4640ecbbd2cbf4d1397f18a600198b261127a25dc99148f1a4cca Nov 29 07:15:29 crc kubenswrapper[4660]: I1129 07:15:29.992243 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.005926 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.008095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.008129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.008139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.008157 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.008167 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.025221 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.050039 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.079375 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.101857 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.111605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.111661 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.111672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.111688 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.111701 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.120098 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.214008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.214038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.214048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.214063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.214074 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.261604 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sqtc9" event={"ID":"df7fd3a7-a7ba-4231-92bc-accc35c6d70c","Type":"ContainerStarted","Data":"77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.261667 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sqtc9" event={"ID":"df7fd3a7-a7ba-4231-92bc-accc35c6d70c","Type":"ContainerStarted","Data":"d412e49f8320f8fcb573fd208602b4df0c20b0a63f6414eb3288847e21c5f30c"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.267103 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.267155 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b16fcdbc84e4640ecbbd2cbf4d1397f18a600198b261127a25dc99148f1a4cca"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.268816 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.268853 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"be2a470293721333ca9887bcadd6efc767a9d8dfffb634c264aeb637216e7ff6"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.270918 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47a41e805661b590b6b773e6137fc4ead26a9d05cff7eb40cdd7c5368bc6cc88"} Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.277665 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.281698 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.281882 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:15:31.281867056 +0000 UTC m=+21.835396955 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.283918 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.295843 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.309106 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.321962 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.322558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.322597 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.322631 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.322646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.322655 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.335825 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.345189 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.358390 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.368185 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.383220 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.383276 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.383325 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.383362 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383485 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383506 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383520 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383575 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:31.383557105 +0000 UTC m=+21.937087004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383662 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383677 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383687 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.383635 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383715 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:31.383706089 +0000 UTC m=+21.937235988 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383755 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.383783 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:31.383774611 +0000 UTC m=+21.937304510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.384515 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.384560 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:31.384549899 +0000 UTC m=+21.938079818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.425244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.425287 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.425295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.425308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.425319 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.527777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.527825 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.527838 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.527853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.527862 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.630087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.630148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.630160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.630199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.630212 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.693249 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:30 crc kubenswrapper[4660]: E1129 07:15:30.693368 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.732792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.732824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.732833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.732845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.732855 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.837941 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.837984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.837996 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.838012 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.838023 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.940501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.940562 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.940575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.940593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:30 crc kubenswrapper[4660]: I1129 07:15:30.940637 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:30Z","lastTransitionTime":"2025-11-29T07:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.043101 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.043162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.043173 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.043190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.043203 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.145210 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.145252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.145262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.145277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.145287 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.246890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.246941 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.246950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.246969 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.246980 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.274713 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.292030 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.292223 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.292195427 +0000 UTC m=+23.845725366 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.293868 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4782
74c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc 
kubenswrapper[4660]: I1129 07:15:31.308974 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.321176 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.332563 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.348981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.349014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.349024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.349037 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.349045 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.351222 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.363060 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.380354 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.392685 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.392733 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.392775 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.392793 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.392834 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.392907 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.392889922 +0000 UTC m=+23.946419821 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393526 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393545 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393545 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393555 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393585 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.393576269 +0000 UTC m=+23.947106168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393599 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.393592439 +0000 UTC m=+23.947122338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393636 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393663 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393673 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.393742 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.393715262 +0000 UTC m=+23.947245161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.413357 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.441731 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.450479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.450529 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.450538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.450553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.450565 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.477454 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.494234 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bjw9w"] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.494826 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.497799 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.497835 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.497876 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.497988 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.505058 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.514357 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.530150 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.553781 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.553993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.554015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.554027 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.554043 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.554055 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.574144 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.587335 4660 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-multus/multus-99mtq"] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.587728 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.591602 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.591964 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.592061 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.592522 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.594620 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f4a7492-b946-4db3-b301-0b860ed7cce1-proxy-tls\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.594656 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f4a7492-b946-4db3-b301-0b860ed7cce1-mcd-auth-proxy-config\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.594676 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f4a7492-b946-4db3-b301-0b860ed7cce1-rootfs\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.594698 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5sjw\" (UniqueName: \"kubernetes.io/projected/0f4a7492-b946-4db3-b301-0b860ed7cce1-kube-api-access-g5sjw\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.600280 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.604278 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.606969 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-g8fkc"] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.607554 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.609088 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.609480 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.617032 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xvjdn"] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.617378 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.617434 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.621127 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.631010 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.643729 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.655465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.655513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.655525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.655544 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.655556 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.659572 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.673035 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.682831 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgvps"] Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.683521 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.684761 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.689884 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.691053 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.691470 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.691493 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.691480 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.691474 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.692654 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.692658 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.693044 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.693161 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695202 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnm7l\" (UniqueName: \"kubernetes.io/projected/58b9294e-0d4f-4671-b4ad-513b428cc45d-kube-api-access-qnm7l\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695328 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5sjw\" (UniqueName: \"kubernetes.io/projected/0f4a7492-b946-4db3-b301-0b860ed7cce1-kube-api-access-g5sjw\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695514 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-multus\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695635 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-os-release\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695705 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-socket-dir-parent\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695794 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695903 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-k8s-cni-cncf-io\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.695985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4h2\" (UniqueName: \"kubernetes.io/projected/e71cb583-cccf-4345-8695-0d3a6c237a35-kube-api-access-4v4h2\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696071 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f4a7492-b946-4db3-b301-0b860ed7cce1-proxy-tls\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696156 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f4a7492-b946-4db3-b301-0b860ed7cce1-mcd-auth-proxy-config\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696250 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696345 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-bin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696433 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-conf-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696527 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-multus-certs\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696653 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-binary-copy\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696767 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-os-release\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696845 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-hostroot\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696914 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-daemon-config\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696986 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f4a7492-b946-4db3-b301-0b860ed7cce1-mcd-auth-proxy-config\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.696988 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-system-cni-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697046 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697066 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-kubelet\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697097 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cnibin\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697112 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-netns\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697128 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-etc-kubernetes\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697146 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " 
pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697162 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2p2t\" (UniqueName: \"kubernetes.io/projected/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-kube-api-access-b2p2t\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697223 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-system-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697271 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-cnibin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697305 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-cni-binary-copy\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697333 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f4a7492-b946-4db3-b301-0b860ed7cce1-rootfs\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.697384 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f4a7492-b946-4db3-b301-0b860ed7cce1-rootfs\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.699947 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.706143 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.706170 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0f4a7492-b946-4db3-b301-0b860ed7cce1-proxy-tls\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.706890 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.708145 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.708857 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.709979 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.710566 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.711251 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.712231 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.712884 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.715070 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.715744 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.717956 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.718590 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.719184 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.720722 4660 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.721344 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.723419 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.724016 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.724910 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.726083 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.726401 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.727149 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5sjw\" (UniqueName: \"kubernetes.io/projected/0f4a7492-b946-4db3-b301-0b860ed7cce1-kube-api-access-g5sjw\") pod \"machine-config-daemon-bjw9w\" (UID: \"0f4a7492-b946-4db3-b301-0b860ed7cce1\") " pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.728105 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.728782 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.729710 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.730408 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.731308 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.731952 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.734790 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.735338 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.736367 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.736905 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.737926 4660 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.738086 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.741710 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.742312 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.744056 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.744935 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.745914 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.747216 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.749564 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.750566 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.752452 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.753162 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.755579 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.756573 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.757880 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758441 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758708 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758756 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758773 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.758783 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.760452 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.761180 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.762556 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.763090 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.764860 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.765741 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.766833 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.767071 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.768520 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.769395 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.798514 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4h2\" (UniqueName: \"kubernetes.io/projected/e71cb583-cccf-4345-8695-0d3a6c237a35-kube-api-access-4v4h2\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.798797 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-bin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.798869 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-bin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799008 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-conf-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799406 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-conf-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799535 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-multus-certs\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799655 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799784 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799901 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.800000 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.799572 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-multus-certs\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.800218 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-os-release\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.800001 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-os-release\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801159 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-hostroot\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801242 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-hostroot\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801322 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-daemon-config\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801413 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801515 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-binary-copy\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801691 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-kubelet\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801763 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-kubelet\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801859 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-system-cni-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801966 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802060 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-netns\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802138 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-netns\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802144 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-etc-kubernetes\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802201 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cnibin\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 
07:15:31.802221 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802241 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.801890 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-system-cni-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802262 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2p2t\" (UniqueName: \"kubernetes.io/projected/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-kube-api-access-b2p2t\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802279 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802335 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802366 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.802392 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802392 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802481 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cnibin\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802514 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-cnibin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: E1129 07:15:31.802538 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:15:32.302526159 +0000 UTC m=+22.856056058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802550 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-cni-binary-copy\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802561 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-daemon-config\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802569 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802567 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-binary-copy\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802597 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802639 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802647 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-cnibin\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802666 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-system-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802699 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-system-cni-dir\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802708 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802735 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnm7l\" (UniqueName: \"kubernetes.io/projected/58b9294e-0d4f-4671-b4ad-513b428cc45d-kube-api-access-qnm7l\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802757 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802791 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802826 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szm8g\" (UniqueName: \"kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802847 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-multus\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " 
pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802862 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802878 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802894 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802926 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-os-release\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802943 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802958 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-socket-dir-parent\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802976 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-k8s-cni-cncf-io\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.802992 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803010 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides\") pod \"ovnkube-node-qgvps\" 
(UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803079 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e71cb583-cccf-4345-8695-0d3a6c237a35-cni-binary-copy\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803149 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-var-lib-cni-multus\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803194 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-os-release\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803233 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-host-run-k8s-cni-cncf-io\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803274 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-multus-socket-dir-parent\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.803603 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.804404 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e71cb583-cccf-4345-8695-0d3a6c237a35-etc-kubernetes\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.808984 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.816969 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.819744 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2p2t\" (UniqueName: \"kubernetes.io/projected/33ca2e94-4023-4f1d-a2bd-0b990aa9c128-kube-api-access-b2p2t\") pod \"multus-additional-cni-plugins-g8fkc\" (UID: \"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\") " pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.820688 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4h2\" (UniqueName: \"kubernetes.io/projected/e71cb583-cccf-4345-8695-0d3a6c237a35-kube-api-access-4v4h2\") pod \"multus-99mtq\" (UID: \"e71cb583-cccf-4345-8695-0d3a6c237a35\") " pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc 
kubenswrapper[4660]: I1129 07:15:31.822982 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnm7l\" (UniqueName: \"kubernetes.io/projected/58b9294e-0d4f-4671-b4ad-513b428cc45d-kube-api-access-qnm7l\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.829225 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.860746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.860774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.860781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.860794 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.860802 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.863767 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.882568 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.897268 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.899693 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-99mtq" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.903911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.903950 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.903993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904012 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904020 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904078 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904031 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904127 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904173 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904234 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904253 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904267 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904678 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904695 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904792 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904793 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904818 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904839 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904863 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904880 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904898 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904916 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904940 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.904952 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905017 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905059 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905096 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szm8g\" (UniqueName: 
\"kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905116 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905206 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905215 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905205 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905360 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905365 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905395 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905408 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905395 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch\") pod 
\"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.905439 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.912253 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.920338 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.924739 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szm8g\" (UniqueName: \"kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g\") pod \"ovnkube-node-qgvps\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.934948 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.950662 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.963641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.963672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.963682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.963694 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.963704 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:31Z","lastTransitionTime":"2025-11-29T07:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:31 crc kubenswrapper[4660]: I1129 07:15:31.995177 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:31Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.014162 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.016137 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.038296 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.057211 
4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.066767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.066797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 
07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.066806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.066819 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.066827 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.074836 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.093559 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.113476 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.130065 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.152094 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.176192 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.176236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 
07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.176247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.176264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.176277 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.187793 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.215189 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.238431 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.254145 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.278833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.278868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.278880 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.278896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.278907 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.280708 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.280746 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.280758 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"31623fca121b708ced33ee4c1fc53dc8e8651558b692a2f7cdff440773bbde37"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.282853 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.285324 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" exitCode=0 Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.285381 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.288496 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"0097a6aa4cab3a22e09a1bb5a3dcc2228565b3b4e7aa8ddf403f0cfd96815434"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.291978 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerStarted","Data":"594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.292031 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerStarted","Data":"cbf071c11bcf0358cc1a85646b193c01479002524958c420e7da5f802201ddea"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.297673 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerStarted","Data":"a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.297850 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerStarted","Data":"6a7a2fdc376ea7872aadea1be9968e9fc1a3f4043d8df535c4a4b748910c1550"} Nov 29 
07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.308024 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:32 crc kubenswrapper[4660]: E1129 07:15:32.308179 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:32 crc kubenswrapper[4660]: E1129 07:15:32.308244 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:15:33.308230058 +0000 UTC m=+23.861759957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.328458 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.347909 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.374588 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.381082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.381116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.381127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.381143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.381152 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.403342 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.428469 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.456908 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: 
I1129 07:15:32.483311 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.483354 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.483367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.483382 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.483393 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.491294 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.510527 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.533715 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.562484 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.585071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.585103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.585114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.585128 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 
07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.585139 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.632075 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.689601 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.689655 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.689668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.689684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.689694 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.693024 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:32 crc kubenswrapper[4660]: E1129 07:15:32.693145 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.707976 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.741325 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.761880 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.791863 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.791893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.791903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.791920 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.791932 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.803234 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.865127 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894073 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894081 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.894740 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 
07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.914703 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.927071 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.948013 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.966433 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:32Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.996417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.996444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.996451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.996463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:32 crc kubenswrapper[4660]: I1129 07:15:32.996472 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:32Z","lastTransitionTime":"2025-11-29T07:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.008877 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.035675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.066810 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.085188 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.098280 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.098315 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.098327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.098343 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.098356 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.114081 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.132072 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.166092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.200139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.200179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.200189 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.200209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.200219 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.301458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.301483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.301493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.301508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.301519 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302837 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302869 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302879 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302891 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302899 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.302908 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.304399 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17" exitCode=0
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.304449 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.318133 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.318261 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:15:37.318242541 +0000 UTC m=+27.871772440 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.318242 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn"
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.318372 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.318406 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:15:35.318399575 +0000 UTC m=+25.871929474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.325940 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.340887 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.352779 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.363596 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.377962 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.392162 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.403549 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.404064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.404090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.404098 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.404113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.404121 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.420989 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.421037 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.421055 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.421107 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421202 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421218 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421227 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421268 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:37.421255692 +0000 UTC m=+27.974785591 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.421821 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421906 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.421932 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:37.421925329 +0000 UTC m=+27.975455218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422329 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422345 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422356 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422427 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:37.4223786 +0000 UTC m=+27.975908499 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422458 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.422484 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:37.422477413 +0000 UTC m=+27.976007312 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.439312 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc 
kubenswrapper[4660]: I1129 07:15:33.450258 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.466362 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.478373 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.492067 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.506015 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:33Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.507459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.507509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.507523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.507542 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.507560 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.610464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.610544 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.610553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.610566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.610575 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.693061 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.693116 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.693167 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.693188 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.693256 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:33 crc kubenswrapper[4660]: E1129 07:15:33.693323 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.713114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.713151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.713162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.713177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.713189 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.815555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.815596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.815620 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.815635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.815647 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.918161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.918233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.918253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.918272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:33 crc kubenswrapper[4660]: I1129 07:15:33.918284 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:33Z","lastTransitionTime":"2025-11-29T07:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.020930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.020962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.020969 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.020983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.020992 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.124088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.124127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.124139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.124161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.124174 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.226368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.226407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.226416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.226429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.226438 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.309405 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457" exitCode=0 Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.309463 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.334114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.334547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.334556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.334575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.334585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.335027 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.347671 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.368244 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.385260 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.398572 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.418530 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.435807 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.437573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.437642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.437653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.437670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.437681 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.448493 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.458567 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.470747 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.482773 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.495197 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.506280 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.519803 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3
ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:34Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.540417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.540473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.540486 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.540504 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.540516 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.642924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.642954 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.642962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.642978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.642987 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.693447 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:34 crc kubenswrapper[4660]: E1129 07:15:34.693587 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.750129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.750185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.750196 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.750213 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.750227 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.852600 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.852645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.852653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.852666 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.852675 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.955428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.955474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.955483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.955498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:34 crc kubenswrapper[4660]: I1129 07:15:34.955508 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:34Z","lastTransitionTime":"2025-11-29T07:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.057577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.057602 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.057631 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.057645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.057654 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.160048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.160082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.160090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.160103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.160112 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.262410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.262457 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.262471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.262489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.262501 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.314405 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201" exitCode=0 Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.314466 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.319371 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.338168 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.338475 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:35 crc kubenswrapper[4660]: E1129 07:15:35.338669 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:35 crc kubenswrapper[4660]: E1129 07:15:35.338733 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:15:39.338714789 +0000 UTC m=+29.892244688 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.357354 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.366244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.366293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.366305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.366327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.366340 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.372207 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.384035 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.398765 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.412435 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6
e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.426220 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.448536 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.468568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.468781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.468820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.468840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.468851 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.479754 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.492806 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.511835 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.526536 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.552446 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.565560 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:35Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.571766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.571809 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.571821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.571838 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.571850 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.674505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.674537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.674549 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.674567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.674576 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.695263 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:35 crc kubenswrapper[4660]: E1129 07:15:35.695377 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.695762 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:35 crc kubenswrapper[4660]: E1129 07:15:35.695844 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.695901 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:35 crc kubenswrapper[4660]: E1129 07:15:35.695958 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.776984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.777053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.777083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.777094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.777103 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.879244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.879270 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.879277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.879289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.879297 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.981961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.981997 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.982007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.982024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:35 crc kubenswrapper[4660]: I1129 07:15:35.982035 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:35Z","lastTransitionTime":"2025-11-29T07:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.084179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.084199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.084207 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.084219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.084227 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.191191 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.191259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.191277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.191790 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.191878 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.294945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.294981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.294991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.295004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.295013 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.325286 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae" exitCode=0 Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.325337 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.349294 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.367476 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.381806 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.395888 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.397337 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.397379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.397388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.397400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.397409 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.413146 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.438433 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.453316 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.466056 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.475339 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.487188 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500756 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500784 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.500961 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.512509 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.523440 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.536625 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.603445 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.603481 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.603490 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.603507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.603517 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.693506 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:36 crc kubenswrapper[4660]: E1129 07:15:36.693719 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.706870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.706930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.706942 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.706970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.706984 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.726288 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-689qx"] Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.726752 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.731042 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.731167 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.731705 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.731873 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.745631 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.768628 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.782066 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.802021 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.810587 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.810647 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.810660 4660 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.810684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.810698 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.821365 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.851720 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.859848 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c27831a3-624c-4e2a-80d5-f40e47f79e64-host\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.859904 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c27831a3-624c-4e2a-80d5-f40e47f79e64-serviceca\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.859981 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxr6\" (UniqueName: \"kubernetes.io/projected/c27831a3-624c-4e2a-80d5-f40e47f79e64-kube-api-access-spxr6\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.879099 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.895072 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.907086 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.913511 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.913549 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.913560 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.913574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.913585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:36Z","lastTransitionTime":"2025-11-29T07:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.920545 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.940763 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.951242 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.960676 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxr6\" (UniqueName: \"kubernetes.io/projected/c27831a3-624c-4e2a-80d5-f40e47f79e64-kube-api-access-spxr6\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.960740 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c27831a3-624c-4e2a-80d5-f40e47f79e64-host\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.960761 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c27831a3-624c-4e2a-80d5-f40e47f79e64-serviceca\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.960993 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c27831a3-624c-4e2a-80d5-f40e47f79e64-host\") pod \"node-ca-689qx\" (UID: 
\"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.961736 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c27831a3-624c-4e2a-80d5-f40e47f79e64-serviceca\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.963130 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.976908 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.981940 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxr6\" (UniqueName: \"kubernetes.io/projected/c27831a3-624c-4e2a-80d5-f40e47f79e64-kube-api-access-spxr6\") pod \"node-ca-689qx\" (UID: \"c27831a3-624c-4e2a-80d5-f40e47f79e64\") " pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:36 crc kubenswrapper[4660]: I1129 07:15:36.990128 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.016144 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.016171 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.016179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.016193 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.016201 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.094218 4660 util.go:30] "No sandbox for pod can be found. 
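
Every "Failed to update status for pod" entry in this stretch of the log fails for the same root cause: the pod.network-node-identity.openshift.io webhook serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months earlier than the node's clock (2025-11-29). A minimal Go sketch, assuming the webhook endpoint 127.0.0.1:9743 quoted in the log is reachable from where the probe runs, that fetches the serving certificate and repeats the expiry comparison:

```go
// certprobe.go: fetch the webhook's serving certificate and repeat the
// expiry check from the log line. The address is taken from the log;
// InsecureSkipVerify is needed precisely because verification would fail.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want the cert itself, not a verified chain
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert, now := certs[0], time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// The condition behind "x509: certificate has expired or is not
		// yet valid": here, 2025-11-29 is after the 2025-08-24 NotAfter.
		fmt.Println("certificate has expired")
	}
}
```

The same comparison is what Go's TLS chain verification performs, which is why every status patch routed through this webhook is rejected until the certificate is rotated or the clock skew is resolved.
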
Need to start a new one" pod="openshift-image-registry/node-ca-689qx" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.117872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.118731 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.118873 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.118996 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.119056 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.228092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.228114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.228122 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.228136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.228144 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
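
The NodeNotReady condition repeating through these entries is the container runtime's network-readiness gate rather than a kubelet fault: the runtime keeps reporting NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which ovnkube-controller writes once it is up. A rough Go approximation of that gate, a sketch and not CRI-O's actual implementation:

```go
// cnicheck.go: a stand-in for the network-readiness gate. The runtime flips
// NetworkReady to true once a CNI config file shows up in the directory
// named by the log message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path quoted in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			fmt.Println("NetworkReady=true, found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
}
```
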
Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.330163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.330195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.330203 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.330216 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.330226 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.332282 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc" exitCode=0 Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.332336 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.335330 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-689qx" event={"ID":"c27831a3-624c-4e2a-80d5-f40e47f79e64","Type":"ContainerStarted","Data":"77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.335384 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-689qx" event={"ID":"c27831a3-624c-4e2a-80d5-f40e47f79e64","Type":"ContainerStarted","Data":"e50fe434eb32e5286d0b6641c3f9752413fbe978f8384edeff8e9725541346f1"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.339529 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.340007 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.340038 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.366000 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:15:37 crc kubenswrapper[4660]: 
E1129 07:15:37.366312 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:15:45.36628793 +0000 UTC m=+35.919817829 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.366368 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
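
The "No retries permitted until ... (durationBeforeRetry 8s)" in the UnmountVolume error above is the kubelet's per-volume exponential backoff: each consecutive failure doubles the wait. A sketch under assumed constants (the kubelet's volume-operation backoff starts around 500ms and doubles per failure, with an upper cap); the shape is the point, since an 8s delay implies five straight failures:

```go
// backoff.go: the doubling schedule behind "durationBeforeRetry 8s".
// The initial delay and cap below are assumptions for illustration.
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry returns the wait after the n-th consecutive failure.
func durationBeforeRetry(n int) time.Duration {
	const (
		initial  = 500 * time.Millisecond // assumed initial delay
		maxDelay = 2 * time.Minute        // assumed upper bound
	)
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure #%d -> retry in %s\n", n, durationBeforeRetry(n))
	}
	// failure #5 -> retry in 8s, matching the entry above
}
```
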
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.367322 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.367966 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.389047 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z 
is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.403307 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.420290 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.431574 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.435190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.435224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.435233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.435247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.435256 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.443280 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.455378 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: 
I1129 07:15:37.467218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.467300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.467339 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.467364 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468461 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468469 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468546 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:45.468530522 +0000 UTC m=+36.022060421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468561 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:45.468554463 +0000 UTC m=+36.022084362 (durationBeforeRetry 8s). 
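
The kube-api-access-* mounts failing here are projected volumes: each bundles the service-account token with the kube-root-ca.crt ConfigMap (and, on OpenShift, openshift-service-ca.crt), so SetUp cannot complete until those objects are registered in the kubelet's object cache, exactly what the "not registered" errors report. A small sketch of what a pod observes at the standard mount path once the volume does come up; the file list reflects the projected sources named in these errors:

```go
// satoken.go: inspect the contents of a kube-api-access-* projected volume
// from inside a pod. The mount path matches the volumeMounts recorded in
// the patch bodies above; service-ca.crt is the OpenShift addition.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "service-ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%-15s missing: %v\n", name, err)
			continue
		}
		fmt.Printf("%-15s %d bytes\n", name, len(b))
	}
}
```
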
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468649 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468679 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468693 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468873 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468946 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.468958 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.469013 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:45.468997263 +0000 UTC m=+36.022527162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.469054 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:15:45.469047955 +0000 UTC m=+36.022577854 (durationBeforeRetry 8s). 
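
The patch bodies carried by the "Failed to update status" entries decode to ordinary JSON once the log escaping is stripped; "$setElementOrder/conditions" is the strategic-merge-patch directive that pins the ordering of the merged conditions list. A sketch that decodes a trimmed, hand-copied fragment of the networking-console-plugin patch from above:

```go
// patchpeek.go: decode a trimmed fragment of a status patch from the log.
// The literal below is copied from the networking-console-plugin entry
// with the shell-style escaping removed and most fields elided.
package main

import (
	"encoding/json"
	"fmt"
)

const patch = `{
  "metadata": {"uid": "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"},
  "status": {
    "$setElementOrder/conditions": [
      {"type": "PodReadyToStartContainers"}, {"type": "Initialized"},
      {"type": "Ready"}, {"type": "ContainersReady"}, {"type": "PodScheduled"}
    ],
    "conditions": [
      {"lastTransitionTime": "2025-11-29T07:15:29Z", "status": "False",
       "type": "PodReadyToStartContainers"}
    ]
  }
}`

func main() {
	var p struct {
		Status struct {
			Order []struct {
				Type string `json:"type"`
			} `json:"$setElementOrder/conditions"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &p); err != nil {
		panic(err)
	}
	fmt.Print("merge order:")
	for _, o := range p.Status.Order {
		fmt.Print(" ", o.Type)
	}
	fmt.Println()
	for _, c := range p.Status.Conditions {
		fmt.Printf("condition %s -> %s\n", c.Type, c.Status)
	}
}
```
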
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.470880 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.491064 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.502737 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.517377 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.537249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.537283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.537292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.537305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.537313 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.539874 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.553724 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.567784 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.584530 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.597342 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.614522 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5
d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.628457 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.639189 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.639226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.639237 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.639252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.639264 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.643816 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.655801 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.668878 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.684018 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.692882 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.692956 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.693002 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.693088 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.692956 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:37 crc kubenswrapper[4660]: E1129 07:15:37.693158 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.698277 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.710092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.721742 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.733654 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.741698 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.741729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.741738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.741751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.741761 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.747251 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.759629 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.774045 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.786686 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.844621 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.844656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.844667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.844682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.844693 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.947119 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.947421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.947531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.947672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:37 crc kubenswrapper[4660]: I1129 07:15:37.947786 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:37Z","lastTransitionTime":"2025-11-29T07:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.049504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.049761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.049840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.049913 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.050023 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.152533 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.152582 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.152596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.152635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.152649 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.156765 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.254674 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.254708 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.254716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.254731 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.254739 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.357195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.357229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.357241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.357257 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.357268 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.459726 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.459797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.459816 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.459842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.459861 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.562305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.562357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.562372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.562390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.562403 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.665263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.665324 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.665336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.665353 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.665364 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.692841 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:38 crc kubenswrapper[4660]: E1129 07:15:38.692989 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.768116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.768156 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.768168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.768183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.768195 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.870249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.870282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.870291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.870305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.870318 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.972430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.972479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.972491 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.972508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:38 crc kubenswrapper[4660]: I1129 07:15:38.972521 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:38Z","lastTransitionTime":"2025-11-29T07:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.074511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.074545 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.074553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.074566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.074577 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.177974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.178036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.178051 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.178072 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.178087 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.280830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.280880 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.280890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.280908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.280920 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.347584 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerStarted","Data":"27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.362294 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.376908 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.383505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.383537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.383550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.383566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.383577 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.384248 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.384374 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.384426 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:15:47.384410929 +0000 UTC m=+37.937940838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.390105 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 
2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.402150 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.414037 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.427976 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.440904 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.451679 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.464949 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.479206 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.486234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.486262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.486302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.486316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.486325 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.494699 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.511219 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.525280 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.539279 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.566195 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\
\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswit
ch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.588147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.588175 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.588184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.588197 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.588205 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.690824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.690866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.690875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.690889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.690899 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.693176 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.693273 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.693363 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.693465 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.693606 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.693702 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.704799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.704826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.704835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.704849 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.704858 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.712153 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.715813 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.720701 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.720750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.720764 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.720783 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.720794 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.726684 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.738083 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.740678 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.744002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.744033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.744042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.744055 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.744063 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.754800 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc 
kubenswrapper[4660]: E1129 07:15:39.757326 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.760643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.760695 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 
crc kubenswrapper[4660]: I1129 07:15:39.760705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.760722 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.760734 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.766117 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:3
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.778771 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.779067 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.782084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.782106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.782114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.782126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.782134 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.792872 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\
" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.796938 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: E1129 07:15:39.797052 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.798421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.798444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.798454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.798470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.798480 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.806394 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.817671 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.831820 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.848438 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.861941 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.878465 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.895584 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.899861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.899897 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.899915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.899935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.899950 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:39Z","lastTransitionTime":"2025-11-29T07:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:39 crc kubenswrapper[4660]: I1129 07:15:39.915805 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.001891 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.001929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.001938 
4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.001952 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.001961 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.104435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.104718 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.104831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.104938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.105032 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.207794 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.208064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.208170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.208283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.208365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.310533 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.310596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.310627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.310645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.310673 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.352884 4660 generic.go:334] "Generic (PLEG): container finished" podID="33ca2e94-4023-4f1d-a2bd-0b990aa9c128" containerID="27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e" exitCode=0 Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.352936 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerDied","Data":"27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.381928 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.395035 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.418579 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.428453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.428500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.428516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.428541 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.428552 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.436484 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.449214 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.464181 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.475532 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.489951 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.504588 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382
c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.518599 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.529580 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.531399 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.531423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.531431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.531444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.531453 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.546001 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.558043 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.570050 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.580374 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:40Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.633198 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.633250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.633261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.633275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.633285 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.693319 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:40 crc kubenswrapper[4660]: E1129 07:15:40.693501 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.735082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.735334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.735493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.735693 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.735817 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.838583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.838651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.838663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.838679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.838690 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.941513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.941551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.941561 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.941579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:40 crc kubenswrapper[4660]: I1129 07:15:40.941591 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:40Z","lastTransitionTime":"2025-11-29T07:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.045024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.045054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.045065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.045081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.045092 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.147269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.147310 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.147321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.147335 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.147346 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.249188 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.249245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.249263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.249292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.249310 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.352025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.352065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.352080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.352101 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.352119 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.454572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.454691 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.454712 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.454736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.454756 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.557674 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.557811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.557845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.557880 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.557906 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.660585 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.660830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.660891 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.660949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.661035 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.695388 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:41 crc kubenswrapper[4660]: E1129 07:15:41.695514 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.695911 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.695944 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:41 crc kubenswrapper[4660]: E1129 07:15:41.696095 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:41 crc kubenswrapper[4660]: E1129 07:15:41.696230 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.763981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.764017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.764029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.764048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.764065 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
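[Annotation, not part of the captured log] Every failure in this stretch has the same root precondition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet. On OpenShift that file is typically written by the network operator's pods (Multus/OVN-Kubernetes) once they start, so the condition normally clears itself later in boot. As an illustration only (the kubelet's real probe goes through the CRI, not this code), a sketch of the directory check implied by the error message:

```go
// cnicheck.go: report whether the directory named in the kubelet error
// contains a CNI config file. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	for _, e := range entries {
		// CNI configs conventionally end in .conf, .conflist, or .json.
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			fmt.Printf("CNI config present: %s\n", name)
			return
		}
	}
	fmt.Println("no CNI configuration file found; network plugin not ready")
}
```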
Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.867058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.867105 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.867117 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.867134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.867147 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.970054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.970102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.970126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.970144 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:41 crc kubenswrapper[4660]: I1129 07:15:41.970155 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:41Z","lastTransitionTime":"2025-11-29T07:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.072068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.072097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.072105 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.072119 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.072127 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.174330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.174423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.174477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.174510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.174571 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.277508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.277557 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.277568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.277586 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.277599 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.380027 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.380066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.380078 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.380095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.380107 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.482958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.483365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.483503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.483686 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.483862 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.587462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.588357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.588521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.588692 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.588847 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692217 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692257 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692285 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.692469 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:42 crc kubenswrapper[4660]: E1129 07:15:42.692539 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.794826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.794870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.794882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.794898 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.794910 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.897503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.897572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.897586 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.897643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.897657 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.999867 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.999928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:42 crc kubenswrapper[4660]: I1129 07:15:42.999941 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:42.999960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:42.999994 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:42Z","lastTransitionTime":"2025-11-29T07:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.103159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.103243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.103267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.103297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.103318 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.208182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.208230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.208242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.208257 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.208269 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.310234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.310301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.310312 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.310331 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.310346 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.413246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.413297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.413312 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.413334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.413349 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.515870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.515925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.515946 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.515970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.515986 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.619057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.619099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.619108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.619124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.619133 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.692824 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.692948 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:43 crc kubenswrapper[4660]: E1129 07:15:43.692960 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.693083 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:43 crc kubenswrapper[4660]: E1129 07:15:43.693292 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:43 crc kubenswrapper[4660]: E1129 07:15:43.693430 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.721450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.721497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.721510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.721529 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.721543 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.824109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.824187 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.824219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.824251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.824276 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.926962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.927025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.927047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.927076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:43 crc kubenswrapper[4660]: I1129 07:15:43.927102 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:43Z","lastTransitionTime":"2025-11-29T07:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.029705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.029750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.029762 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.029778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.029792 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.131830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.131860 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.131867 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.131879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.131888 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.234362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.234405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.234414 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.234429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.234439 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.337586 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.337695 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.337716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.337745 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.337775 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.440097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.440140 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.440151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.440197 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.440208 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.542720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.542747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.542755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.542767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.542776 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.645111 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.645151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.645163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.645178 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.645187 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.693396 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:44 crc kubenswrapper[4660]: E1129 07:15:44.693517 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.747331 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.747370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.747380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.747394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.747404 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.850053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.850115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.850128 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.850143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.850155 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.952964 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.952994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.953002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.953016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:44 crc kubenswrapper[4660]: I1129 07:15:44.953025 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:44Z","lastTransitionTime":"2025-11-29T07:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.054654 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.054901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.054994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.055085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.055162 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.158052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.158096 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.158108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.158127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.158141 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.261292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.261707 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.261946 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.262182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.262385 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.365401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.365470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.365488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.365514 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.365533 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.459160 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.459352 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:16:01.459316292 +0000 UTC m=+52.012846231 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.468045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.468081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.468089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.468102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.468117 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.560303 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.560365 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.560393 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.560417 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560523 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560571 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560593 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560537 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560666 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560540 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560702 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560710 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:01.560685382 +0000 UTC m=+52.114215321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560719 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560777 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:01.560727783 +0000 UTC m=+52.114257722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560800 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:01.560788305 +0000 UTC m=+52.114318244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.560831 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:01.560819126 +0000 UTC m=+52.114349065 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.570591 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.570663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.570679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.570702 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.570719 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.672991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.673029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.673038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.673053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.673062 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.692776 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.692890 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.692972 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.692779 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.693119 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:45 crc kubenswrapper[4660]: E1129 07:15:45.693294 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.775224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.775262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.775275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.775291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.775300 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.790185 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74"] Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.790556 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.792047 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.792309 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.808446 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.826424 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.838096 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.852249 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.867444 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.877517 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.877555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.877566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.877584 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.877597 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.882271 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.895332 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.914140 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.932118 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.945252 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.959727 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.964505 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cts6d\" (UniqueName: \"kubernetes.io/projected/24bac20d-6112-403d-b98d-dfe5b13913d7-kube-api-access-cts6d\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.964630 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.964698 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24bac20d-6112-403d-b98d-dfe5b13913d7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.964734 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.973154 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.979822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.979875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.979885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.979902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.979911 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:45Z","lastTransitionTime":"2025-11-29T07:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:45 crc kubenswrapper[4660]: I1129 07:15:45.987358 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:45Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.003768 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\
":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.018448 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.041977 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5
d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.065325 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24bac20d-6112-403d-b98d-dfe5b13913d7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.065395 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.065443 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cts6d\" (UniqueName: \"kubernetes.io/projected/24bac20d-6112-403d-b98d-dfe5b13913d7-kube-api-access-cts6d\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.065488 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.066228 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.066279 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24bac20d-6112-403d-b98d-dfe5b13913d7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.076199 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24bac20d-6112-403d-b98d-dfe5b13913d7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.082841 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.082878 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.082887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.082901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.082910 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.093205 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cts6d\" (UniqueName: \"kubernetes.io/projected/24bac20d-6112-403d-b98d-dfe5b13913d7-kube-api-access-cts6d\") pod \"ovnkube-control-plane-749d76644c-msq74\" (UID: \"24bac20d-6112-403d-b98d-dfe5b13913d7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.102357 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.185324 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.185374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.185394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.185415 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.185431 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.287850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.288129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.288138 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.288152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.288160 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.371462 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" event={"ID":"24bac20d-6112-403d-b98d-dfe5b13913d7","Type":"ContainerStarted","Data":"feb17b794531801317f1ce3cb06d75462f111c126dd543fb2d3aba19ce0d33bc"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.374298 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/0.log" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.379802 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4" exitCode=1 Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.379835 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.380586 4660 scope.go:117] "RemoveContainer" containerID="f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.391237 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.391297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.391319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.391347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.391365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.399097 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.415527 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.429170 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.443461 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.462306 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.481009 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.493381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.493631 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.493932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.494216 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.494923 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.496458 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.507882 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.521963 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.532040 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.543584 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.555803 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.568089 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.585467 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.597033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.597069 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.597081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.597097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.597106 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.599891 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.623183 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:46Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.692845 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:46 crc kubenswrapper[4660]: E1129 07:15:46.693001 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.698998 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.699084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.699094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.699110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.699120 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.801243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.801273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.801574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.801632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.801645 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.904949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.905017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.905038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.905065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:46 crc kubenswrapper[4660]: I1129 07:15:46.905083 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:46Z","lastTransitionTime":"2025-11-29T07:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.008195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.008413 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.008473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.008586 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.008695 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.111090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.111150 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.111168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.111194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.111213 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.214765 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.214989 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.215048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.215140 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.215205 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.318405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.318469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.318493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.318522 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.318545 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.420982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.421025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.421034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.421051 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.421061 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.480604 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:47 crc kubenswrapper[4660]: E1129 07:15:47.481202 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:47 crc kubenswrapper[4660]: E1129 07:15:47.481400 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:16:03.481365278 +0000 UTC m=+54.034895347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.523689 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.523742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.523751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.523770 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.523783 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.626408 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.626739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.626834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.626928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.627047 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.693435 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.693496 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:47 crc kubenswrapper[4660]: E1129 07:15:47.693567 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:47 crc kubenswrapper[4660]: E1129 07:15:47.693678 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.693705 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:47 crc kubenswrapper[4660]: E1129 07:15:47.693885 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.729258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.729290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.729300 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.729314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.729324 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.942942 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.942992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.943007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.943027 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:47 crc kubenswrapper[4660]: I1129 07:15:47.943042 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:47Z","lastTransitionTime":"2025-11-29T07:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.045576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.045683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.045713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.045767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.045792 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.148428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.148502 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.148525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.148554 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.148574 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.252084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.252142 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.252161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.252185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.252204 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.357261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.357306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.357319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.357338 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.357349 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.461057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.461150 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.461179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.461207 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.461231 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.564025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.564075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.564091 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.564121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.564140 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.666437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.666469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.666484 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.666503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.666516 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.693177 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:48 crc kubenswrapper[4660]: E1129 07:15:48.693522 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.768890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.768959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.768979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.769006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.769033 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.871966 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.872022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.872039 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.872063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.872080 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.974929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.974984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.974994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.975015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:48 crc kubenswrapper[4660]: I1129 07:15:48.975028 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:48Z","lastTransitionTime":"2025-11-29T07:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.078092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.078150 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.078162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.078182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.078196 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.181435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.181530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.181547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.181565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.181576 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.284259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.284442 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.284518 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.284550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.285333 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.388406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.388493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.388515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.388986 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.389093 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.491870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.491930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.491949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.491974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.491993 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.595357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.595416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.595443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.595474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.595497 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.692769 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.693045 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:49 crc kubenswrapper[4660]: E1129 07:15:49.693085 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:49 crc kubenswrapper[4660]: E1129 07:15:49.693194 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.692695 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:49 crc kubenswrapper[4660]: E1129 07:15:49.693822 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.703306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.703339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.703350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.703367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.703380 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.706895 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.728052 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5
d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.739966 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.754896 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.768438 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.780816 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.798712 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.805840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.805879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.805892 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.805930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.805942 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.819889 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
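
Every status-patch failure above carries the same pair of timestamps inside its x509 error. A small self-contained sketch that pulls the two timestamps out of such a line and reports how stale the webhook certificate is — the regex shape ("current time <now> is after <notAfter>") is an assumption read off the sample records here, not any kubelet API:

```python
import re
from datetime import datetime, timezone

# Sample payload copied verbatim from the records above.
LINE = ('tls: failed to verify certificate: x509: certificate has expired '
        'or is not yet valid: current time 2025-11-29T07:15:49Z is after '
        '2025-08-24T17:21:41Z')

# Assumed message shape: "current time <now> is after <notAfter>".
PAT = re.compile(r'current time (\S+) is after (\S+)')

def expiry_skew(line: str):
    """Return how long ago the certificate expired, or None if no match."""
    m = PAT.search(line)
    if m is None:
        return None
    now, not_after = (
        datetime.strptime(t, '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)
        for t in m.groups()
    )
    return now - not_after

print(expiry_skew(LINE))  # -> 96 days, 13:54:08 for the sample above
```

For these records the certificate had been expired for roughly three months, which is consistent with every admission-gated status patch failing identically.
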
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.833565 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.848303 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.862764 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.880852 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.898117 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.908651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.908685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.908696 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.908712 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.908726 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:49Z","lastTransitionTime":"2025-11-29T07:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.915132 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.929567 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:49 crc kubenswrapper[4660]: I1129 07:15:49.946242 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:49Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.011714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.011762 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.011774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.011790 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.011800 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.114297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.114346 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.114358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.114374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.114384 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.186302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.186342 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.186353 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.186368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.186378 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
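
On the node itself, one way to confirm the expiry independently of kubelet is to pull the serving certificate straight off the webhook port named in these records (127.0.0.1:9743) and read its validity window. A sketch, assuming the third-party `cryptography` package is available; verification is deliberately disabled because a strict handshake would abort on the expired certificate before we could inspect it:

```python
import socket
import ssl

from cryptography import x509  # third-party; assumed installed

HOST, PORT = '127.0.0.1', 9743  # endpoint taken from the webhook errors above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # inspect only; trust nothing

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # Raw DER is available even when the chain is not verified.
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print('subject  :', cert.subject.rfc4514_string())
print('notBefore:', cert.not_valid_before)  # naive UTC datetimes
print('notAfter :', cert.not_valid_after)   # expect 2025-08-24 17:21:41 here
```

If notAfter matches the 2025-08-24T17:21:41Z seen in the log, the problem is a single stale serving certificate rather than anything wrong with the pods whose patches are being rejected.
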
Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.206029 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.216779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.216821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.216833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.216850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.216861 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.234240 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.237729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.237758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
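The status patch above is rejected at the TLS layer, and the kubelet keeps retrying the identical patch with the identical result: it cannot verify the network-node-identity webhook's serving certificate, which expired at 2025-08-24T17:21:41Z, roughly 96 days before the logged time of 2025-11-29T07:15:50Z. A minimal probe of that endpoint, written as a sketch under the assumption that it runs on the node itself with Python 3 available, reproduces the verification failure independently of the kubelet:

import socket
import ssl

# Endpoint taken from the kubelet error: the network-node-identity webhook.
HOST, PORT = "127.0.0.1", 9743

ctx = ssl.create_default_context()
ctx.check_hostname = False  # we only care about the validity window, not the SAN

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            print("handshake OK:", tls.version())
except ssl.SSLCertVerificationError as err:
    # With an expired certificate this prints e.g. "certificate has expired";
    # against the default system trust store it may instead report an unknown
    # issuer first, since the webhook CA is cluster-internal.
    print("verification failed:", err.verify_message)

Either failure message confirms the connection is rejected before any webhook logic runs, matching the "failed to verify certificate" errors in the journal; the admission webhook therefore fails every node and pod status patch until the certificate is rotated.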
event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.237769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.237785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.237797 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.252007 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.255863 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.255998 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
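The NotReady condition itself has a second cause recorded in the same entries: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, likely because the network operator's pods, which would write one, cannot make progress while the webhook rejects every status update. A short sketch, assuming it runs on the node, that mirrors the existence check behind the "no CNI configuration file" message:

import json
from pathlib import Path

# Directory named in the kubelet error message.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

# libcni-style loaders consider *.conf, *.conflist and *.json as candidates.
candidates = sorted(
    p for p in (CNI_DIR.iterdir() if CNI_DIR.is_dir() else [])
    if p.suffix in {".conf", ".conflist", ".json"}
)

if not candidates:
    print(f"no CNI configuration file in {CNI_DIR}/ - matches the kubelet error")
for p in candidates:
    try:
        print(f"{p.name}: network {json.loads(p.read_text()).get('name', '<unnamed>')!r}")
    except (OSError, json.JSONDecodeError) as err:
        print(f"{p.name}: unreadable ({err})")

An empty result here is expected for as long as the journal keeps reporting NetworkReady=false; once the network plugin starts and drops a config file into that directory, the Ready condition clears on the next sync.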
event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.256054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.256113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.256192 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.267974 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.271317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.271375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.271386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.271405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.271415 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.283018 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.283187 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.285306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.285347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.285358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.285395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.285409 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.388349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.388394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.388406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.388419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.388428 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.396748 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" event={"ID":"33ca2e94-4023-4f1d-a2bd-0b990aa9c128","Type":"ContainerStarted","Data":"eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.398135 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" event={"ID":"24bac20d-6112-403d-b98d-dfe5b13913d7","Type":"ContainerStarted","Data":"ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.398159 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" event={"ID":"24bac20d-6112-403d-b98d-dfe5b13913d7","Type":"ContainerStarted","Data":"a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.399834 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/0.log" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.401733 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.402223 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.412991 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.433190 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5
d0306fd42c6f0a4518bddbd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.446031 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.458553 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 
2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.469059 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.481650 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494808 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494934 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494969 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.494994 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.504779 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.515867 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.528887 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.540110 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.551009 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.563422 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.577853 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.589886 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.597200 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.597264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.597275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.597312 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.597325 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.603657 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.615431 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.626810 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.637159 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.646944 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.656252 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.667503 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.681167 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.693255 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:50 crc kubenswrapper[4660]: E1129 07:15:50.693360 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.694636 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.699857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.699896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.699907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.699923 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.699935 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.705762 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.715409 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.730361 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.742773 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.756552 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.770314 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6
cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.780952 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.799302 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc97472
0fdf2042999df7ef3ca62033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:50Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.801458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.801495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.801506 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.801523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.801534 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.903985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.904028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.904040 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.904057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:50 crc kubenswrapper[4660]: I1129 07:15:50.904070 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:50Z","lastTransitionTime":"2025-11-29T07:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.007036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.007092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.007109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.007132 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.007147 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.109365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.109416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.109431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.109456 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.109469 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.211868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.211908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.211918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.211933 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.211943 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.316730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.316795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.316810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.316831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.316846 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.407560 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/1.log" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.408302 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/0.log" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.411436 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033" exitCode=1 Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.411516 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.411596 4660 scope.go:117] "RemoveContainer" containerID="f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.412328 4660 scope.go:117] "RemoveContainer" containerID="631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033" Nov 29 07:15:51 crc kubenswrapper[4660]: E1129 07:15:51.412471 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.419395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.419423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.419431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.419448 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.419459 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.435459 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.458749 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.476525 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.490696 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.882339 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:51 crc kubenswrapper[4660]: E1129 07:15:51.882742 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.883224 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.883863 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.883911 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:51 crc kubenswrapper[4660]: E1129 07:15:51.883900 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:51 crc kubenswrapper[4660]: E1129 07:15:51.883989 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:51 crc kubenswrapper[4660]: E1129 07:15:51.884139 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.892228 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.892290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.892312 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.892339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.892362 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.897935 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.916035 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.934745 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.950439 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.962201 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.972629 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.982337 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 
07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.993877 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.993902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.993911 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.993924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.993932 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:51Z","lastTransitionTime":"2025-11-29T07:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:51 crc kubenswrapper[4660]: I1129 07:15:51.995030 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:51Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.005809 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.019523 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.034556 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6
cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.045799 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.057535 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.076419 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.089664 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.096551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.096579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.096588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.096632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.096641 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.102596 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.123011 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f637eabe9d1604ede0becdebae422631ef9616c5d0306fd42c6f0a4518bddbd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"message\\\":\\\"v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489784 5785 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489803 5785 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489826 5785 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:15:40.489864 5785 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:15:40.489903 5785 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489927 5785 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:15:40.489987 5785 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for 
openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.135649 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.147316 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.157250 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.168897 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.182197 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.194576 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.199654 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.199711 4660 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.199733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.199755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.199772 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.206627 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc 
kubenswrapper[4660]: I1129 07:15:52.218366 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.231136 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.249596 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 
07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.274635 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.301268 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.301494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.301574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.301677 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.301797 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.404195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.404235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.404246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.404261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.404273 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.419041 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/1.log" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.426557 4660 scope.go:117] "RemoveContainer" containerID="631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033" Nov 29 07:15:52 crc kubenswrapper[4660]: E1129 07:15:52.426951 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.441904 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.454264 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.464544 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.478022 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.490725 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.505865 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.506704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.506753 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.506772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.506797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.506815 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.516455 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.530755 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.545144 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.561887 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 
07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.584639 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.600324 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.610043 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.610102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.610118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.610138 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.610178 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.618546 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.632003 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.645585 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.672573 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:52Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.712089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.712350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.712473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.712606 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.712778 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.814887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.815148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.815235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.815329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.815422 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.918452 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.918524 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.918540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.918570 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:52 crc kubenswrapper[4660]: I1129 07:15:52.918590 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:52Z","lastTransitionTime":"2025-11-29T07:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.021437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.021482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.021493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.021509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.021521 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.124540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.124994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.125162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.125323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.125462 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.228534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.228604 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.228655 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.228683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.228707 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.331013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.331088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.331104 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.331125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.331137 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.433572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.433653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.433670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.433688 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.433702 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.536229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.536269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.536280 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.536299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.536314 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.638703 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.638749 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.638758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.638775 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.638785 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.693078 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.693197 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:53 crc kubenswrapper[4660]: E1129 07:15:53.693340 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.693361 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:53 crc kubenswrapper[4660]: E1129 07:15:53.693438 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:53 crc kubenswrapper[4660]: E1129 07:15:53.693499 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.693732 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:53 crc kubenswrapper[4660]: E1129 07:15:53.693938 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.740489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.740515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.740522 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.740554 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.740563 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.842665 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.842713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.842721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.842735 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.842744 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.945652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.945692 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.945704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.945721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:53 crc kubenswrapper[4660]: I1129 07:15:53.945732 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:53Z","lastTransitionTime":"2025-11-29T07:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.047482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.047507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.047515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.047527 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.047535 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.149246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.149522 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.149682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.149820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.149947 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.253440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.253477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.253488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.253505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.253516 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.355402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.355474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.355486 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.355503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.355515 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.457730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.457925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.458017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.458085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.458141 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.560286 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.560362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.560384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.560407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.560425 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.662327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.662542 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.662696 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.662827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.662912 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.765915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.765971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.765979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.765993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.766003 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.869082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.869125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.869134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.869149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.869158 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.971736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.971769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.971779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.971797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:54 crc kubenswrapper[4660]: I1129 07:15:54.971809 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:54Z","lastTransitionTime":"2025-11-29T07:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.074209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.074254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.074274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.074291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.074302 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.177154 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.177194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.177215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.177232 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.177243 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.280398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.280458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.280476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.280501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.280522 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.383141 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.383196 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.383210 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.383230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.383245 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.485468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.485496 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.485505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.485521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.485532 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.588253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.588313 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.588330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.588352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.588369 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.690960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.691014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.691047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.691075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.691095 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.693333 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.693335 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.693400 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:55 crc kubenswrapper[4660]: E1129 07:15:55.693535 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.693583 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:55 crc kubenswrapper[4660]: E1129 07:15:55.693683 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:55 crc kubenswrapper[4660]: E1129 07:15:55.693747 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:55 crc kubenswrapper[4660]: E1129 07:15:55.693810 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.793840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.793912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.793925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.793943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.793982 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.895866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.896113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.896252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.896391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.896483 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:55Z","lastTransitionTime":"2025-11-29T07:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:55 crc kubenswrapper[4660]: I1129 07:15:55.999454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.000053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.000136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.000254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.000333 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.103995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.104055 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.104069 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.104090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.104104 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.207734 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.207812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.207838 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.207869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.207892 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.312056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.312176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.312203 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.312286 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.312400 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.415593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.415686 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.415710 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.415740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.415762 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.518031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.518075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.518087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.518099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.518108 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.620532 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.620557 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.620565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.620594 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.620626 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.723089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.723118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.723126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.723138 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.723147 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.825821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.825890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.825905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.825928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.825943 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.928082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.928136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.928145 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.928159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:56 crc kubenswrapper[4660]: I1129 07:15:56.928169 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:56Z","lastTransitionTime":"2025-11-29T07:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.030422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.030495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.030553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.030692 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.030727 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.132742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.132786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.132795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.132809 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.132818 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.235478 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.235528 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.235548 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.235571 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.235584 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.337996 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.338035 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.338047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.338063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.338075 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.440184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.440245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.440262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.440284 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.440302 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.543234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.543293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.543308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.543326 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.543368 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.645231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.645292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.645305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.645319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.645328 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.693054 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:57 crc kubenswrapper[4660]: E1129 07:15:57.693180 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.693649 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.693672 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:57 crc kubenswrapper[4660]: E1129 07:15:57.693723 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:57 crc kubenswrapper[4660]: E1129 07:15:57.693966 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.694061 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:57 crc kubenswrapper[4660]: E1129 07:15:57.694183 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.747835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.747892 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.747906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.747926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.747938 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.850247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.850316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.850341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.850373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.850396 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.953328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.953372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.953384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.953401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:57 crc kubenswrapper[4660]: I1129 07:15:57.953418 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:57Z","lastTransitionTime":"2025-11-29T07:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.056423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.056468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.056478 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.056494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.056505 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.158725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.158767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.158780 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.158796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.158808 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.261505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.261579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.261713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.261753 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.261785 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.364491 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.364540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.364553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.364569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.364581 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.469361 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.469399 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.469409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.469424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.469438 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.571877 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.571930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.571941 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.571960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.571972 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.674605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.674713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.674737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.674769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.674794 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.777855 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.777915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.777934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.777957 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.777975 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.879846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.879885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.879895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.879910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.879922 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.981985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.982056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.982068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.982085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:58 crc kubenswrapper[4660]: I1129 07:15:58.982097 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:58Z","lastTransitionTime":"2025-11-29T07:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.084683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.084751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.084774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.084803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.084824 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.188269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.188422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.188463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.188495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.188516 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.290463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.290519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.290536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.290558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.290575 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.392681 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.392731 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.392742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.392758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.392769 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.495190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.495221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.495229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.495242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.495251 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.598031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.598099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.598110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.598133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.598146 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.693537 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.693582 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:15:59 crc kubenswrapper[4660]: E1129 07:15:59.693809 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.693821 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.693827 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:15:59 crc kubenswrapper[4660]: E1129 07:15:59.693919 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:15:59 crc kubenswrapper[4660]: E1129 07:15:59.693971 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:15:59 crc kubenswrapper[4660]: E1129 07:15:59.696043 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.701725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.701769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.701782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.701797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.701812 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.719440 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\
\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de26
4b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.734704 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.747982 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.760763 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.774219 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.793206 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.804142 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.804182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.804193 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.804209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.804219 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.807071 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.822527 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.840792 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.857657 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.870459 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.884931 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.897417 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.908211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.908283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.908299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.908326 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.908341 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:15:59Z","lastTransitionTime":"2025-11-29T07:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.912400 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.930564 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f78
14a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:15:59 crc kubenswrapper[4660]: I1129 07:15:59.946121 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.010492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.010538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.010550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.010568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.010583 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.113342 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.113405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.113416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.113429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.113437 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.215874 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.215917 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.215930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.215946 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.215956 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.318961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.319033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.319058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.319087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.319109 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.421718 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.421749 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.421758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.421772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.421781 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.523816 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.523865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.523878 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.523895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.523906 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.524834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.524861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.524872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.524887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.524898 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.540135 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.544721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.544756 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.544765 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.544797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.544806 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.561087 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.565060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.565108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
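----- annotation (editor's note, not part of the journal) -----
Every status-patch attempt in this burst is rejected for the same reason: the API server cannot complete the call to the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-29. This pattern is typical of a CRC VM resumed long after its certificates were issued; the failure is a certificate-lifetime problem, not a kubelet problem. A minimal sketch to confirm the expiry from the node itself, assuming Python 3 and the third-party "cryptography" package are available (the host and port come straight from the error text):

    import datetime
    import socket
    import ssl

    from cryptography import x509

    # Deliberately skip verification: the goal is to inspect the expired
    # certificate, not to trust it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes

    cert = x509.load_der_x509_certificate(der)
    # not_valid_after is naive UTC; newer cryptography also offers
    # not_valid_after_utc.
    not_after = cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", not_after)
    print("expired: ", not_after < datetime.datetime.now(datetime.timezone.utc))

If notAfter matches the 2025-08-24T17:21:41Z in the log, the node status updates cannot succeed until the cluster's certificates are rotated (or the clock corrected).
----- end annotation -----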
event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.565121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.565139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.565152 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.577235 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.580599 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.580669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.580684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.580705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.580715 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.592687 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.596182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.596208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
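----- annotation (editor's note, not part of the journal) -----
Between the failed patches, setters.go:603 keeps re-recording the Ready=False condition. Its reason (KubeletNotReady) and message (no CNI configuration file in /etc/kubernetes/cni/net.d/) describe a second symptom: the network plugin has not written its CNI config, so the node could not go Ready even if the status patch succeeded. On this cluster the two symptoms are plausibly linked, since the OVN-Kubernetes components that would write that config depend on the same network-node-identity machinery whose webhook certificate has expired. A small parsing sketch for pulling the condition JSON out of such lines (the sample line below is abridged from this journal):

    import json
    import re

    # Abridged sample of a setters.go:603 entry; a real line carries the
    # full timestamps and the complete message text.
    line = (
        'Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626269 4660 '
        'setters.go:603] "Node became not ready" node="crc" '
        'condition={"type":"Ready","status":"False","reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false"}'
    )

    match = re.search(r'condition=(\{.*\})', line)
    if match:
        cond = json.loads(match.group(1))
        print(f'{cond["type"]}={cond["status"]}: {cond["reason"]} ({cond["message"]})')
----- end annotation -----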
event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.596217 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.596247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.596257 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.608460 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:00 crc kubenswrapper[4660]: E1129 07:16:00.608595 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
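----- annotation (editor's note, not part of the journal) -----
The kubelet_node_status.go:572 entry closes the burst: after a fixed number of consecutive patch failures the kubelet gives up for this sync and starts over on the next one, so the same burst recurs for as long as the webhook certificate stays expired. The five attempts visible above match the kubelet's per-sync retry budget (the nodeStatusUpdateRetry constant, 5 in current kubelet sources). A tally sketch over a saved journal dump; the kubelet.log path is an assumption, produced with something like "journalctl -u kubelet > kubelet.log":

    # Count "will retry" errors and "exceeds retry count" give-ups.
    retries = giveups = 0
    with open("kubelet.log", encoding="utf-8", errors="replace") as f:
        for line in f:
            if "Error updating node status, will retry" in line:
                retries += 1
            elif "update node status exceeds retry count" in line:
                giveups += 1

    # With a budget of 5, retries should be roughly 5x the give-ups while
    # the failure persists.
    print(f"will-retry errors: {retries}, give-ups: {giveups}")
----- end annotation -----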
event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.626269 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.728664 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.728709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.728725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.728741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.728752 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.831236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.831276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.831292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.831314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.831329 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.934235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.934324 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.934346 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.934377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:00 crc kubenswrapper[4660]: I1129 07:16:00.934401 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:00Z","lastTransitionTime":"2025-11-29T07:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.037263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.037425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.037487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.037512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.037531 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.140646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.140714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.140739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.140766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.140783 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.243111 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.243143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.243174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.243189 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.243198 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.346214 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.346255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.346265 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.346294 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.346312 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.448881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.448943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.448955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.448974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.448987 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.491959 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.492305 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:16:33.492265172 +0000 UTC m=+84.045795141 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.551255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.551293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.551302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.551316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.551362 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
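The "No retries permitted until ... (durationBeforeRetry 32s)" entries above come from the volume manager's per-operation exponential backoff: each consecutive failure of the same operation roughly doubles the wait before the next attempt. A sketch of that schedule, assuming the commonly used kubelet defaults of a 500ms initial delay, a factor of 2, and a cap a little over two minutes (these constants are an assumption here, not read from this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed parameters: 500ms initial delay, doubling per consecutive
	// failure, capped at 2m2s. Not read from the log itself.
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for failures := 1; failures <= 9; failures++ {
		fmt.Printf("consecutive failure %d -> durationBeforeRetry %v\n", failures, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Under those assumptions the 32s delay seen here corresponds to roughly the seventh consecutive failure of the same unmount operation, which will keep recurring until the kubevirt.io.hostpath-provisioner driver re-registers with the kubelet.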
Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.593315 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.593381 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.593410 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593431 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.593445 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593496 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:33.593478089 +0000 UTC m=+84.147007988 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593558 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593574 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593586 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593641 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593644 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593727 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593667 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:33.593655453 +0000 UTC m=+84.147185362 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593749 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593776 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:33.593750986 +0000 UTC m=+84.147280925 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.593821 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:16:33.593788547 +0000 UTC m=+84.147318486 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.660681 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.660792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.660850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.660889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.660914 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.693133 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.693156 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.693183 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.693156 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.693288 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.693476 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.693571 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:01 crc kubenswrapper[4660]: E1129 07:16:01.693675 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.764143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.764237 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.764263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.764282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.764294 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.867872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.867931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.867948 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.867969 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.867985 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.971547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.971594 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.971633 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.971655 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:01 crc kubenswrapper[4660]: I1129 07:16:01.971668 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:01Z","lastTransitionTime":"2025-11-29T07:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.073844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.073894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.073903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.073916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.073925 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.175907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.175995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.176007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.176025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.176038 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.278395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.278441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.278452 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.278469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.278483 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.380124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.380166 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.380176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.380191 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.380202 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.482672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.482735 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.482751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.482768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.482779 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.586454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.586504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.586515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.586534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.586547 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.689407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.689465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.689477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.689494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.689505 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.792913 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.792951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.792961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.792978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.792993 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.895675 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.895729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.895738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.895751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.895760 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.999251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.999317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.999330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.999347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:02 crc kubenswrapper[4660]: I1129 07:16:02.999359 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:02Z","lastTransitionTime":"2025-11-29T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.102213 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.102260 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.102272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.102289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.102304 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.204665 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.204740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.204763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.204793 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.204814 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.307772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.307877 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.307906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.307935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.307957 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
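Every "Node became not ready" condition above carries the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime reports NetworkReady=true only once a network configuration file appears in that directory. A minimal Go sketch of the equivalent directory scan (the accepted extension list is an assumption for illustration; runtimes differ in exactly which extensions they load):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// Assumed extension set; CRI-O style runtimes typically look for
		// .conf, .conflist, and .json network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir, "- network provider not started yet?")
	}
}
```

On this node the directory evidently stays empty, so the same NotReady condition is regenerated roughly every 100ms, as the timestamps above show.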
Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.410667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.410716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.410729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.410747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.410761 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.508561 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.508770 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.508876 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:16:35.508855142 +0000 UTC m=+86.062385111 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.516160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.516201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.516212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.516235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.516247 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.619385 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.619427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.619439 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.619465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.619479 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.693449 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.693508 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.693469 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.693451 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn"
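The condition payload printed by the setters.go:603 entries is ordinary JSON, so it can be pulled out of these lines and decoded directly. A small sketch with a hand-rolled struct mirroring just the fields kubelet sets here (not the full Kubernetes NodeCondition API type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors only the JSON fields that appear in the log payload.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from a setters.go:603 line above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (reason %s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```

Note that lastHeartbeatTime and lastTransitionTime track each other from line to line, which is consistent with the status PATCHes never landing on the API server while the webhook call keeps failing.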
Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.693579 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.693683 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.693851 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:03 crc kubenswrapper[4660]: E1129 07:16:03.693924 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.721694 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.721729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.721752 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.721768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.721778 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.824704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.824730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.824738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.824751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.824759 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.926872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.926902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.926910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.926923 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:03 crc kubenswrapper[4660]: I1129 07:16:03.926930 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:03Z","lastTransitionTime":"2025-11-29T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.031866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.031928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.031950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.031978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.031996 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.134766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.134856 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.134869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.134892 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.134904 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.237095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.237169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.237190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.237222 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.237243 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.340488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.340534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.340546 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.340563 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.340574 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.443274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.443320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.443333 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.443349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.443363 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.545668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.545725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.545746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.545771 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.545788 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.647814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.647899 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.647910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.647932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.647947 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.693728 4660 scope.go:117] "RemoveContainer" containerID="631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.751265 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.751644 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.751653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.751668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.751678 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.854125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.854169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.854180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.854195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.854207 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.956877 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.956909 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.956920 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.956937 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:04 crc kubenswrapper[4660]: I1129 07:16:04.956951 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:04Z","lastTransitionTime":"2025-11-29T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.060808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.060879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.060894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.060918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.060932 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.164881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.164934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.164948 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.164967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.164978 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.267273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.267322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.267334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.267349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.267369 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.369980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.370033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.370047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.370064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.370077 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471278 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/1.log" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471690 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471703 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.471713 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.473800 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.474192 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.488822 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.501675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.517847 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.531334 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.547138 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.560080 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.573755 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.574240 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.574310 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.574336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.574354 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.574364 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.584534 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.595256 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.607756 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.620239 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.635465 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/o
s-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.649894 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.667307 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.675926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.675979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.675990 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.676005 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.676015 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.685452 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.692920 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.692942 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.692935 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.692972 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:05 crc kubenswrapper[4660]: E1129 07:16:05.693045 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:05 crc kubenswrapper[4660]: E1129 07:16:05.693245 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:05 crc kubenswrapper[4660]: E1129 07:16:05.693342 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:05 crc kubenswrapper[4660]: E1129 07:16:05.693422 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.699922 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.778359 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.778393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.778402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.778416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.778425 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.881375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.881416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.881428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.881444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.881455 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.984955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.985032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.985047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.985080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:05 crc kubenswrapper[4660]: I1129 07:16:05.985100 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:05Z","lastTransitionTime":"2025-11-29T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.084010 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.087540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.087572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.087582 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.087675 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.087718 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.099469 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.100338 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.119920 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.134072 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.149879 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.164303 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.177101 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.189428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.189462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.189473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.189488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.189501 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.190919 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.200586 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.211163 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.223913 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.239291 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.251590 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.263931 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.277212 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.290337 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.291395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.291453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.291465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.291824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.291850 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.306538 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.393916 4660 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.393956 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.393971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.393991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.394007 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.476950 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/2.log" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.477403 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/1.log" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.479917 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" exitCode=1 Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.479983 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.480061 4660 scope.go:117] "RemoveContainer" containerID="631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.481009 4660 scope.go:117] "RemoveContainer" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" Nov 29 07:16:06 crc kubenswrapper[4660]: E1129 07:16:06.481227 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.493436 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.495960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.495995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.496004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.496019 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.496027 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.505710 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.517009 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.528107 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.537974 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.548656 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.557713 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.569275 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.582371 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.593892 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.600133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.600162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.600208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.600233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.600244 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.608155 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.622018 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.633047 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.645316 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.656540 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.666852 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.684575 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631d74f5c6de6e4949988f95330720160cc974720fdf2042999df7ef3ca62033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"message\\\":\\\"68] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.5.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1129 07:15:50.609392 6017 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xvjdn in node crc\\\\nI1129 07:15:50.609917 6017 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default are: map[]\\\\nI1129 07:15:50.609937 6017 services_controller.go:443] Built service openshift-image-registry/image-registry LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5000, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped 
ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.702590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.702667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.702679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.702693 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.702703 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.812242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.812275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.812503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.812517 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.812526 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.915008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.915075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.915095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.915114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:06 crc kubenswrapper[4660]: I1129 07:16:06.915160 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:06Z","lastTransitionTime":"2025-11-29T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.017313 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.017355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.017364 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.017377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.017434 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.120062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.120104 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.120116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.120134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.120146 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.222929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.223198 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.223271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.223341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.223410 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.326525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.326567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.326578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.326595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.326630 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.428598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.428661 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.428672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.428686 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.428696 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.484213 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/2.log" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.488327 4660 scope.go:117] "RemoveContainer" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" Nov 29 07:16:07 crc kubenswrapper[4660]: E1129 07:16:07.488518 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.498875 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.511011 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.521468 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.531501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.531579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.531589 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.531622 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.531636 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.534075 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.546353 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f78
14a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.559235 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.571664 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.582875 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.598801 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.615117 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.630956 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.633781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.633858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.633868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.633885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.633896 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.647530 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.661380 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.682961 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.693115 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.693211 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:07 crc kubenswrapper[4660]: E1129 07:16:07.693329 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.693148 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:07 crc kubenswrapper[4660]: E1129 07:16:07.693447 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:07 crc kubenswrapper[4660]: E1129 07:16:07.693534 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.693770 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:07 crc kubenswrapper[4660]: E1129 07:16:07.693918 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.701697 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.716995 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.729108 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.735752 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.735817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.735829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.735844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.735853 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.838358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.838400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.838437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.838454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.838465 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.940849 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.941282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.941492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.941735 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:07 crc kubenswrapper[4660]: I1129 07:16:07.941984 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:07Z","lastTransitionTime":"2025-11-29T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.043911 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.043941 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.043950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.043962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.043970 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.147018 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.147082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.147107 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.147136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.147160 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.249527 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.249587 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.249598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.249635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.249646 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.352460 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.352494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.352504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.352518 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.352531 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.454971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.455268 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.455277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.455293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.455305 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.558961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.559075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.559093 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.559114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.559129 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.662741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.662795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.662812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.662837 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.662853 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.765440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.765488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.765503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.765521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.765533 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.868365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.868415 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.868429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.868453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.868468 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.971000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.971276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.971375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.971466 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:08 crc kubenswrapper[4660]: I1129 07:16:08.971568 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:08Z","lastTransitionTime":"2025-11-29T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.074245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.074648 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.074777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.074908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.075034 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.177893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.177961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.177983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.178010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.178030 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.281345 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.281392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.281409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.281433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.281449 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.384009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.384073 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.384099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.384129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.384151 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.489002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.489049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.489064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.489086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.489103 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.591241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.591303 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.591322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.591347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.591365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.692600 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.692673 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.692645 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.692700 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:09 crc kubenswrapper[4660]: E1129 07:16:09.692769 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:09 crc kubenswrapper[4660]: E1129 07:16:09.692931 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:09 crc kubenswrapper[4660]: E1129 07:16:09.693008 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
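[Annotation: every "Failed to update status for pod" entry above and below fails for the same reason: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-29. The sketch below is a hypothetical, minimal Go check (not part of any cluster tooling) that reproduces the x509 validity test failing in these entries: dial the endpoint, skip chain verification so an expired certificate can still be retrieved, and compare the leaf certificate's NotBefore/NotAfter window against the current time.]

// certcheck.go: minimal sketch, for illustration only, of the x509 validity
// check behind "certificate has expired or is not yet valid" in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // the webhook endpoint from the log
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	// Skip chain verification so the handshake succeeds even when the
	// certificate is expired; we only want to inspect its validity window.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Fprintln(os.Stderr, "no peer certificates presented")
		os.Exit(1)
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	switch {
	case now.Before(leaf.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(leaf.NotAfter):
		fmt.Println("certificate has expired") // the condition driving the log errors
	default:
		fmt.Println("certificate is within its validity window")
	}
}

[Run on the node as "go run certcheck.go 127.0.0.1:9743"; if the log above is accurate it should report a notAfter of 2025-08-24T17:21:41Z and print "certificate has expired".]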
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:09 crc kubenswrapper[4660]: E1129 07:16:09.693190 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.700050 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.700087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.700138 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.700165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.700216 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.706779 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.719796 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.728845 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.738837 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.753030 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.765170 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.777273 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.791769 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.802509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.802543 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.802551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.802567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.802577 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.803486 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.813788 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.825357 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.838807 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.850634 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.865570 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.877307 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.887874 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.898971 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.904269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.904308 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.904317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.904330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:09 crc kubenswrapper[4660]: I1129 07:16:09.904340 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:09Z","lastTransitionTime":"2025-11-29T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.006759 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.006807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.006819 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.006836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.006848 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.109516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.109581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.109593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.109634 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.109647 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.211834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.211883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.211894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.211910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.211921 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.314991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.315053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.315064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.315079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.315088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.418128 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.418174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.418183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.418200 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.418212 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.521015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.521056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.521067 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.521082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.521093 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.623777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.623820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.623832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.623890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.623906 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.701684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.701741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.701756 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.701771 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.701779 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.712720 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:10Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.715973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.716003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.716012 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.716024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.716032 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.727113 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:10Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.730890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.730930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
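
The two error signatures in this excerpt are independently checkable. The repeated KubeletNotReady condition reports exactly one fact: kubelet found no CNI configuration file under /etc/kubernetes/cni/net.d/, so it holds the node's Ready condition at False until the network plugin writes one. Below is a minimal diagnostic sketch; the directory is quoted straight from the log message, while the file extensions checked are the conventional CNI ones and are an assumption here:

```python
#!/usr/bin/env python3
"""List CNI configuration files, mirroring the kubelet check in the log.

Diagnostic sketch only: the path comes from the log message; the
extensions (.conf, .conflist, .json) are conventional CNI names.
"""
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs(directory: Path = CNI_DIR) -> list[Path]:
    # Kubelet reports NetworkReady=false until at least one config shows up here.
    if not directory.is_dir():
        return []
    return sorted(p for p in directory.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs()
    for p in found:
        print(f"found CNI config: {p}")
    if not found:
        print(f"no CNI configuration files in {CNI_DIR} - "
              "the network plugin has not written its config yet")
```

An empty result is consistent with the NetworkPluginNotReady messages above; once the network plugin comes up and drops its config here, the Ready condition should flip on a subsequent status sync.
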
event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.730938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.730951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.730961 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.742249 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:10Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.748005 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.748405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
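
The patch failures are a separate problem: the status update leaves kubelet, but the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-29, so every attempt is rejected and kubelet keeps retrying. Here is a sketch for confirming the certificate dates from the node itself; it assumes the third-party cryptography package (version 42 or newer for the *_utc properties), and it disables verification on purpose, since the point is to read a certificate that no longer verifies:

```python
#!/usr/bin/env python3
"""Fetch and inspect the webhook's serving certificate from the node.

Sketch under assumptions: host and port are taken from the failed Post in
the log; requires the third-party 'cryptography' package (>= 42).
"""
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # inspect the expired cert instead of failing the handshake

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print(f"subject:    {cert.subject.rfc4514_string()}")
print(f"not before: {cert.not_valid_before_utc}")
print(f"not after:  {cert.not_valid_after_utc}")
if now > cert.not_valid_after_utc:
    print("EXPIRED - matches the x509 error in the log")
```

On OpenShift Local/CRC this pattern typically appears after resuming an instance long past its internal certificate lifetimes; the usual path to recovery is letting the cluster's own certificate rotation complete (or recreating the instance), rather than patching the webhook certificate by hand.
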
event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.748419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.748476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.748557 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.764573 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:10Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.768934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.768976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.768988 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.769003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.769014 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.782935 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:10Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:10 crc kubenswrapper[4660]: E1129 07:16:10.783068 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.784645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.784668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.784676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.784689 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.784699 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.886410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.886435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.886445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.886458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.886467 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.988299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.988496 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.988515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.988534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:10 crc kubenswrapper[4660]: I1129 07:16:10.988545 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:10Z","lastTransitionTime":"2025-11-29T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.091307 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.091352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.091363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.091380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.091391 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.194124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.194159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.194168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.194186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.194197 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.296442 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.296491 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.296509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.296531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.296548 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.398660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.398709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.398723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.398739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.398749 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.501531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.501574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.501586 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.501630 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.501646 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.604187 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.604238 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.604249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.604269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.604281 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.692534 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.692539 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.692541 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.692541 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:11 crc kubenswrapper[4660]: E1129 07:16:11.692703 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:11 crc kubenswrapper[4660]: E1129 07:16:11.692773 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:11 crc kubenswrapper[4660]: E1129 07:16:11.692844 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:11 crc kubenswrapper[4660]: E1129 07:16:11.692953 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.706411 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.706454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.706492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.706511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.706522 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.808698 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.808768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.808780 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.808796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.808807 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.910912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.910955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.910971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.910987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:11 crc kubenswrapper[4660]: I1129 07:16:11.911002 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:11Z","lastTransitionTime":"2025-11-29T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.013408 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.013458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.013468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.013484 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.013495 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.116596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.116656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.116669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.116685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.116695 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.219068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.219126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.219143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.219164 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.219176 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.321959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.321992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.322002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.322019 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.322030 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.424745 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.424785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.424797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.424825 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.424835 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.527772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.527811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.527842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.527860 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.527872 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.630109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.630148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.630159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.630202 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.630216 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.732212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.732260 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.732272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.732292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.732304 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.834727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.834764 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.834776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.834789 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.834799 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.938893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.938934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.938943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.938957 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:12 crc kubenswrapper[4660]: I1129 07:16:12.938969 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:12Z","lastTransitionTime":"2025-11-29T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.041217 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.041264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.041276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.041293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.041304 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.143187 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.143237 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.143250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.143273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.143285 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.245393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.245422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.245435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.245449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.245459 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.348431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.348465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.348475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.348488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.348497 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.450980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.451018 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.451028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.451042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.451051 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.553600 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.553656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.553666 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.553679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.553690 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.655738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.655778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.655786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.655801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.655810 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.693493 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.693535 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:13 crc kubenswrapper[4660]: E1129 07:16:13.693930 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.693587 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:13 crc kubenswrapper[4660]: E1129 07:16:13.694055 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.693552 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:13 crc kubenswrapper[4660]: E1129 07:16:13.694481 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:13 crc kubenswrapper[4660]: E1129 07:16:13.694584 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.757688 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.757727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.757735 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.757751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.757760 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.860455 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.860495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.860508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.860525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.860536 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.963044 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.963080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.963091 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.963108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:13 crc kubenswrapper[4660]: I1129 07:16:13.963121 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:13Z","lastTransitionTime":"2025-11-29T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.065959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.065993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.066003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.066017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.066029 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.167847 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.167882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.167890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.167902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.167912 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.269899 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.269944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.269955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.269971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.269983 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.371861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.371894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.371903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.371916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.371925 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.474691 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.474727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.474738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.474754 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.474765 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.576086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.576124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.576144 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.576161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.576171 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.678653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.678698 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.678710 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.678724 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.678735 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.781068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.781109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.781121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.781135 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.781155 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.883219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.883255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.883266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.883281 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.883293 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.986293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.986339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.986355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.986371 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:14 crc kubenswrapper[4660]: I1129 07:16:14.986381 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:14Z","lastTransitionTime":"2025-11-29T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.089091 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.089140 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.089152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.089170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.089190 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.191459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.191494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.191504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.191519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.191529 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.294273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.294307 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.294317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.294332 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.294344 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.396431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.396466 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.396476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.396492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.396503 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.499168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.499224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.499235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.499278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.499288 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.601995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.602149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.602165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.602180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.602190 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.692821 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.692902 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:15 crc kubenswrapper[4660]: E1129 07:16:15.692996 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.693013 4660 util.go:30] "No sandbox for pod can be found. 
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.693077 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:16:15 crc kubenswrapper[4660]: E1129 07:16:15.693196 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d"
Nov 29 07:16:15 crc kubenswrapper[4660]: E1129 07:16:15.693264 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 07:16:15 crc kubenswrapper[4660]: E1129 07:16:15.693342 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.704487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.704531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.704543 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.704560 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.704571 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.807168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.807227 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.807243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.807267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.807283 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.910277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.910316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.910329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.910347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:15 crc kubenswrapper[4660]: I1129 07:16:15.910359 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:15Z","lastTransitionTime":"2025-11-29T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.012760 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.012804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.012815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.012832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.012845 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.114761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.114795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.114804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.114817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.114826 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.217871 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.217918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.217930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.217947 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.217963 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.320317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.320357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.320370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.320388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.320402 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.422437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.422480 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.422491 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.422509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.422520 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.524330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.524387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.524400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.524441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.524455 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.626924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.627006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.627018 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.627062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.627076 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.729429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.729475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.729486 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.729501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.729512 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.832291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.832334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.832345 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.832362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.832371 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.934669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.934708 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.934720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.934736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:16 crc kubenswrapper[4660]: I1129 07:16:16.934746 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:16Z","lastTransitionTime":"2025-11-29T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.036905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.036951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.036963 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.036981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.036995 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.139634 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.139676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.139686 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.139702 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.139714 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.241759 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.242661 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.242721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.242749 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.242765 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.345684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.345746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.345759 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.345774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.345784 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.448578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.448641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.448653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.448669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.448680 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.550929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.550968 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.550980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.550997 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.551008 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.653915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.653965 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.653977 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.653994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.654008 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.693198 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.693276 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.693284 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.693395 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:17 crc kubenswrapper[4660]: E1129 07:16:17.693392 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:17 crc kubenswrapper[4660]: E1129 07:16:17.693486 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:17 crc kubenswrapper[4660]: E1129 07:16:17.693547 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:17 crc kubenswrapper[4660]: E1129 07:16:17.693801 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.705257 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.756064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.756100 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.756110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.756125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.756134 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.858390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.858445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.858455 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.858470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.858481 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.960966 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.961036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.961049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.961066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:17 crc kubenswrapper[4660]: I1129 07:16:17.961077 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:17Z","lastTransitionTime":"2025-11-29T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.063314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.063361 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.063373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.063389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.063401 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.165361 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.165389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.165397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.165409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.165417 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.267575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.267876 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.267885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.267897 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.267907 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.370630 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.370659 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.370668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.370684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.370695 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.473404 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.473452 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.473465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.473482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.473494 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.529231 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/0.log" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.529284 4660 generic.go:334] "Generic (PLEG): container finished" podID="e71cb583-cccf-4345-8695-0d3a6c237a35" containerID="a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3" exitCode=1 Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.529877 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerDied","Data":"a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.530178 4660 scope.go:117] "RemoveContainer" containerID="a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.559185 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.573124 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.578062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.578089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.578099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.578113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.578125 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.588309 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.602396 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.642505 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.680074 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.680115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.680125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.680144 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.680157 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.697024 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.711083 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.724459 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.735316 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.748760 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.762835 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782064 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.782391 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.795214 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.810222 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.822981 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.835284 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.856710 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.870882 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.884336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.884367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.884376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.884390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.884400 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.986999 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.987038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.987046 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.987062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:18 crc kubenswrapper[4660]: I1129 07:16:18.987072 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:18Z","lastTransitionTime":"2025-11-29T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.089009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.089070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.089082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.089098 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.089111 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.191431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.191472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.191484 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.191503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.191513 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.294428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.294496 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.294530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.294564 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.294589 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.397370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.397473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.397485 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.397506 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.397518 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.499656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.499687 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.499698 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.499714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.499726 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.534149 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/0.log" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.534200 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerStarted","Data":"f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.547918 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.559112 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.569893 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.582571 4660 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.594135 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.602321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.602366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.602377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.602394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.602406 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.605810 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.617356 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.629352 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.642463 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.652532 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.666305 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.678942 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.692653 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.692653 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.692716 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.692772 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:19 crc kubenswrapper[4660]: E1129 07:16:19.692903 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:19 crc kubenswrapper[4660]: E1129 07:16:19.693005 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:19 crc kubenswrapper[4660]: E1129 07:16:19.693104 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:19 crc kubenswrapper[4660]: E1129 07:16:19.693451 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.693746 4660 scope.go:117] "RemoveContainer" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" Nov 29 07:16:19 crc kubenswrapper[4660]: E1129 07:16:19.693889 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.694692 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.703950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.703982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.703990 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.704003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.704031 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.716918 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.728433 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.739415 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.756090 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.767915 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.778987 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.788729 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.802213 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.806208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.806234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.806244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.806258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.806266 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.814164 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.827504 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.842859 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.855771 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.867527 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.880228 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.895759 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908346 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908461 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908472 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:19Z","lastTransitionTime":"2025-11-29T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.908748 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.923437 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.938151 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.952534 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.965116 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.975810 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to 
/host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:19 crc kubenswrapper[4660]: I1129 07:16:19.987833 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:19Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.010313 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.011135 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.011195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.011210 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.011226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.011237 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.113643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.113693 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.113703 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.113721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.113730 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.216022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.216056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.216066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.216080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.216092 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.318668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.318724 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.318744 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.319072 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.319106 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.422091 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.422129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.422172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.422189 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.422198 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.526478 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.526527 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.526539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.526559 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.526572 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.628526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.628583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.628595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.628637 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.628650 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.731843 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.731889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.731909 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.731940 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.731955 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.834451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.834515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.834530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.834549 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.834562 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.861358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.861421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.861435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.861464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.861479 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.878012 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.882985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.883049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.883063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.883087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.883104 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.898296 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.901685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.901728 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.901739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.901759 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.901770 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.913595 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.917135 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.917170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.917180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.917195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.917204 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.929591 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.934242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.934290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.934299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.934317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.934329 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.947904 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:20Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:20 crc kubenswrapper[4660]: E1129 07:16:20.948036 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.949670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.949733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.949746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.949767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:20 crc kubenswrapper[4660]: I1129 07:16:20.949780 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:20Z","lastTransitionTime":"2025-11-29T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.051788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.051828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.051841 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.051857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.051870 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.154018 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.154058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.154068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.154084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.154095 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.256297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.256335 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.256347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.256364 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.256374 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.359386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.359427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.359440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.359458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.359472 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.461660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.461696 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.461706 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.461926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.461952 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.564033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.564077 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.564096 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.564120 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.564137 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.667319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.667357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.667367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.667383 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.667396 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.692632 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.692667 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.692667 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:21 crc kubenswrapper[4660]: E1129 07:16:21.692805 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:21 crc kubenswrapper[4660]: E1129 07:16:21.692959 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.692991 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:21 crc kubenswrapper[4660]: E1129 07:16:21.693020 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:21 crc kubenswrapper[4660]: E1129 07:16:21.693094 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.770670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.770766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.770789 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.770817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.770840 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.874290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.874354 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.874376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.874403 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.874423 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.977085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.977132 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.977160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.977178 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:21 crc kubenswrapper[4660]: I1129 07:16:21.977190 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:21Z","lastTransitionTime":"2025-11-29T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.079409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.079438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.079446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.079458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.079466 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.181733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.181781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.181793 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.181810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.181822 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.283934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.283974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.283986 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.284001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.284013 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.387205 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.387247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.387259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.387276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.387288 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.489999 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.490041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.490049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.490061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.490070 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.592579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.592668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.592685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.592701 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.592715 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.695465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.695516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.695526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.695543 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.695555 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.798770 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.798810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.798821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.798838 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.798850 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.901707 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.901776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.901814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.901831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:22 crc kubenswrapper[4660]: I1129 07:16:22.901844 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:22Z","lastTransitionTime":"2025-11-29T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.005422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.005474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.005486 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.005503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.005517 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.108328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.108408 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.108420 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.108487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.108510 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.210872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.210916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.210925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.210940 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.210951 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.312708 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.312782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.312807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.312836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.312863 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.414555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.414621 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.414632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.414645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.414669 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.516342 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.516372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.516382 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.516398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.516409 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.621642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.621691 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.621707 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.621727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.621740 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.693412 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.693488 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.693511 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:23 crc kubenswrapper[4660]: E1129 07:16:23.693550 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.693500 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:23 crc kubenswrapper[4660]: E1129 07:16:23.693659 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:23 crc kubenswrapper[4660]: E1129 07:16:23.693715 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:23 crc kubenswrapper[4660]: E1129 07:16:23.693827 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.724430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.724461 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.724470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.724484 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.724493 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.826669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.826740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.826754 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.826768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.826778 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.928368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.928397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.928405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.928418 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:23 crc kubenswrapper[4660]: I1129 07:16:23.928426 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:23Z","lastTransitionTime":"2025-11-29T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.030881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.030918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.030928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.030942 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.030951 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.133383 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.133418 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.133433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.133450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.133462 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.235976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.236021 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.236034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.236052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.236065 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.339643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.339693 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.339705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.339722 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.339734 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.441766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.441829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.441850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.441873 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.441890 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.544376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.544427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.544439 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.544458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.544469 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.647528 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.647560 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.647569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.647581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.647589 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.750211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.750241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.750249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.750262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.750270 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.852468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.852511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.852527 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.852547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.852558 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.955079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.955160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.955169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.955184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:24 crc kubenswrapper[4660]: I1129 07:16:24.955193 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:24Z","lastTransitionTime":"2025-11-29T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.057437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.057477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.057487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.057499 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.057508 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.160169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.160192 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.160199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.160211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.160220 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.267025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.267083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.267097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.267118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.267130 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.369788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.369833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.369847 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.369865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.369879 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.472783 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.472845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.472869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.472928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.472984 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.575557 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.575602 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.575627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.575641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.575650 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.677678 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.677721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.677732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.677748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.677765 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.693263 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.693319 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.693287 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.693272 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:25 crc kubenswrapper[4660]: E1129 07:16:25.693413 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:25 crc kubenswrapper[4660]: E1129 07:16:25.693520 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:25 crc kubenswrapper[4660]: E1129 07:16:25.693654 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:25 crc kubenswrapper[4660]: E1129 07:16:25.693721 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.780344 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.780384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.780395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.780410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.780421 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.882334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.882383 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.882395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.882411 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.882422 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.984924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.984973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.984985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.985000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:25 crc kubenswrapper[4660]: I1129 07:16:25.985012 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:25Z","lastTransitionTime":"2025-11-29T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.087083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.087148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.087162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.087180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.087192 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.190261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.190307 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.190318 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.190334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.190346 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.292993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.293025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.293032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.293045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.293054 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.394946 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.394987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.394996 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.395010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.395019 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.497738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.497773 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.497781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.497795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.497803 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.599642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.599686 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.599696 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.599725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.599738 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.702225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.702258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.702274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.702290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.702300 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.804777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.804818 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.804827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.804842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.804852 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.907187 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.907227 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.907239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.907255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:26 crc kubenswrapper[4660]: I1129 07:16:26.907265 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:26Z","lastTransitionTime":"2025-11-29T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.009375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.009443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.009455 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.009471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.009481 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.112144 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.112180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.112191 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.112207 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.112219 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.214685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.214732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.214746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.214763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.214776 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.316912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.317034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.317078 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.317099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.317109 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.420734 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.420781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.420796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.420813 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.420826 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.523454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.523516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.523587 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.523678 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.523691 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.626350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.626388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.626397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.626412 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.626421 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.693215 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.693288 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.693326 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.693260 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:27 crc kubenswrapper[4660]: E1129 07:16:27.693422 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:27 crc kubenswrapper[4660]: E1129 07:16:27.693531 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:27 crc kubenswrapper[4660]: E1129 07:16:27.693677 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:27 crc kubenswrapper[4660]: E1129 07:16:27.693729 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.729165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.729233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.729254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.729282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:27 crc kubenswrapper[4660]: I1129 07:16:27.729303 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:27Z","lastTransitionTime":"2025-11-29T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
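The burst above shows the root condition: the kubelet keeps marking the node NotReady because the container runtime reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/ (on CRC that file is normally written by OVN-Kubernetes). A minimal Python sketch of the filesystem check an operator might run by hand to confirm what the log claims; only the directory path is taken from the log, the helper name and the extension filter are assumptions:

from pathlib import Path

# Directory quoted verbatim in the kubelet message above.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs(conf_dir: Path) -> list[Path]:
    """List CNI network configs the runtime could load (.conf/.conflist/.json)."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs(CNI_CONF_DIR)
    if found:
        for path in found:
            print(f"CNI config present: {path}")
    else:
        # The state this log shows: the network provider has not written
        # its config yet, so the runtime reports NetworkReady=false.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ "
              "-- has your network provider started?")

On a healthy node the directory would typically hold an OVN-Kubernetes config such as 10-ovn-kubernetes.conf; that file name is illustrative, not taken from this log.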
[The five-record node-status cycle continues at the same ~100 ms cadence from 07:16:27.729165 through 07:16:29.581455, identical except for the advancing timestamps.]
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.684446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.684479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.684487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.684500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.684512 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:29Z","lastTransitionTime":"2025-11-29T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.692708 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.692748 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.692760 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.692708 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:16:29 crc kubenswrapper[4660]: E1129 07:16:29.692824 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d"
Nov 29 07:16:29 crc kubenswrapper[4660]: E1129 07:16:29.692940 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 07:16:29 crc kubenswrapper[4660]: E1129 07:16:29.692974 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 07:16:29 crc kubenswrapper[4660]: E1129 07:16:29.693036 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.704301 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
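From this record on, every pod status patch fails for the same reason: the API server must call the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743/pod, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-29T07:16:29Z. A small sketch of the validity-window comparison the TLS stack is making, using only the two timestamps quoted in the log (illustrative, not OpenShift or Go crypto code):

from datetime import datetime, timezone

# Both timestamps are copied from the webhook error above.
NOT_AFTER = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # cert notAfter
NOW = datetime(2025, 11, 29, 7, 16, 29, tzinfo=timezone.utc)        # node clock

# The x509 "expired or not yet valid" check reduces to this comparison.
if NOW > NOT_AFTER:
    age = NOW - NOT_AFTER
    print(f"certificate has expired: current time {NOW.isoformat()} "
          f"is after {NOT_AFTER.isoformat()} ({age.days} days past notAfter)")

The roughly 96-day gap is consistent with a cluster image that sat powered off past its certificate rotation window; note that the Post itself reaches the endpoint, so only certificate validity is at issue.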
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.721015 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.737680 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
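Records like the ones above embed the whole strategic-merge patch inside err= as a Go-quoted string, so every inner quote is rendered as \\\" and the JSON is hard to read. A hedged sketch of recovering the patch for inspection; RAW is an abridged copy of the network-node-identity patch above (most fields dropped), and stripping bare backslashes only works because the abridged payload contains none of its own:

import json

# Abridged copy of the escaped patch in the record above; in the journal
# rendering, every quote inside the Go-quoted err= string shows up as \\\".
RAW = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"Ready\\\"}]}}'

# Dropping all backslashes is safe only because this abridged payload
# contains no literal backslashes of its own.
patch = json.loads(RAW.replace("\\", ""))
print(patch["metadata"]["uid"])        # ef543e1b-8068-4ea3-b32a-61027b32e95d
print(list(patch["status"]))           # ['$setElementOrder/conditions']

$setElementOrder/conditions is a strategic-merge-patch directive (the desired ordering of the conditions array), not a PodStatus field, which is why it sits alongside the real status keys in these payloads.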
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.747834 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.763041 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
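The node-status cycle resumes a few records below, and each pass ends with a "Node became not ready" record whose condition= payload is plain JSON. A short illustrative sketch for pulling that condition out of a journal line; the sample line is abridged from this log and the helper is an assumption, not kubelet code:

import json

# Sample journal record, abridged from the "Node became not ready" lines here.
LINE = ('Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.788015 4660 '
        'setters.go:603] "Node became not ready" node="crc" '
        'condition={"type":"Ready","status":"False",'
        '"lastTransitionTime":"2025-11-29T07:16:29Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready"}')

def node_condition(line: str) -> dict:
    """Everything after 'condition=' in these records is one JSON object."""
    _, _, payload = line.partition("condition=")
    return json.loads(payload)

cond = node_condition(LINE)
print(cond["reason"], cond["lastTransitionTime"])  # KubeletNotReady 2025-11-29T07:16:29Z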
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.774314 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.787866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.787945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.787958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.787976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.788015 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:29Z","lastTransitionTime":"2025-11-29T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.789586 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.807982 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f78
14a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.823912 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.838265 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.853000 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.862015 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.872645 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.882256 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.890063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.890095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.890106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.890123 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.890134 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:29Z","lastTransitionTime":"2025-11-29T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.894113 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.905742 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.915235 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.930276 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c560
7ff7090cf95d76deeae77f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.991945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.991973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.991982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.991995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:29 crc kubenswrapper[4660]: I1129 07:16:29.992004 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:29Z","lastTransitionTime":"2025-11-29T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
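The status-patch rejection logged above at 07:16:29 names the actual fault on this node: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-29, so every call to https://127.0.0.1:9743 fails TLS verification with "x509: certificate has expired or is not yet valid". The entries that follow (NodeNotReady, the missing CNI config, the ovnkube-controller crash loop) are downstream of that. The following Go sketch reproduces the same validity-window check the handshake performs; the certificate path is a hypothetical placeholder for illustration, not one taken from these logs.

// certcheck.go - a minimal sketch, assuming the webhook's serving
// certificate is available as a PEM file at the (hypothetical) path below.
// It performs the same NotBefore/NotAfter comparison that the TLS
// handshake in the log is failing.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/network-node-identity-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the condition reported at 07:16:29
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Run against the webhook's serving certificate to confirm the expiry window before rotating the certificate; the NotAfter it prints should match the 2025-08-24T17:21:41Z deadline quoted in the error above.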
Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.094729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.094797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.094811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.094833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.094848 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.198129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.198189 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.198213 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.198243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.198264 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.301071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.301116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.301134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.301155 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.301171 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.403817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.403855 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.403864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.403879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.403888 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.507061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.507111 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.507125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.507143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.507190 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.609643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.609700 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.609717 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.609739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.609756 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.713073 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.713121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.713134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.713152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.713164 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.815332 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.815391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.815410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.815435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.815455 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.918700 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.918755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.918778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.918807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:30 crc kubenswrapper[4660]: I1129 07:16:30.918824 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:30Z","lastTransitionTime":"2025-11-29T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.021440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.021494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.021510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.021531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.021547 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.123936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.123978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.123989 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.124007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.124019 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.226153 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.226184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.226194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.226209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.226218 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.241779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.241842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.241865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.241896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.241920 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.262763 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:31Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.266732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.266773 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.266786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.266804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.266816 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.281495 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:31Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.285880 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.285915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.285924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.285938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.285949 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.299645 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"168d3329-d7ae-441d-bd3b-eaf0cacb1014\\\",\\\"systemUUID\\\":\\\"e8ec79b4-9420-428e-820e-3d546f24f945\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:31Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.303366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.303406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.303416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.303430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.303459 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.316600 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{…},\\\"runtimeHandlers\\\":[…]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:31Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.320323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.320366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.320380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.320401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.320417 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.335880 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{…},\\\"runtimeHandlers\\\":[…]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:31Z is after 
2025-08-24T17:21:41Z" Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.336040 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.337746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.337793 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.337801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.337824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.337842 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.439898 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.439932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.439945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.439962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.439974 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.542183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.542249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.542272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.542299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.542322 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.645504 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.645536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.645545 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.645556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.645565 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.693366 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.693424 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.693519 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.693366 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.693637 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.693683 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.693728 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:31 crc kubenswrapper[4660]: E1129 07:16:31.693787 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.747457 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.747491 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.747503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.747516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.747525 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.850249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.850324 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.850349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.850381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.850401 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.953394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.953428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.953439 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.953454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:31 crc kubenswrapper[4660]: I1129 07:16:31.953464 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:31Z","lastTransitionTime":"2025-11-29T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.055660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.055690 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.055697 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.055711 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.055720 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.158320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.158373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.158384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.158400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.158411 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.261217 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.261260 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.261271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.261290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.261303 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.364166 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.364226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.364242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.364267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.364285 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.467292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.467353 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.467374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.467401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.467418 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.570083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.570124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.570132 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.570148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.570158 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.672929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.672982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.672998 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.673020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.673037 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.775787 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.775830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.775843 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.775859 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.775869 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.877936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.878004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.878049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.878075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.878094 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.981327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.981374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.981387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.981405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:32 crc kubenswrapper[4660]: I1129 07:16:32.981424 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:32Z","lastTransitionTime":"2025-11-29T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.084787 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.084835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.084844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.084858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.084868 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.187402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.187446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.187458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.187475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.187487 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.290066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.290101 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.290112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.290126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.290138 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.398475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.398551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.398572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.398633 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.398657 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.501908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.502075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.502164 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.502199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.502300 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.518659 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.518939 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:37.518915248 +0000 UTC m=+148.072445187 (durationBeforeRetry 1m4s). 
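The "(durationBeforeRetry 1m4s)" in the record just above is the kubelet's exponential backoff for failed volume operations: each consecutive failure roughly doubles the wait before the next attempt, up to a fixed cap, and the Error detail for this unmount follows below. A minimal sketch of that scheme, assuming an initial delay of 500ms and a cap of 2m2s (the constants are illustrative assumptions, not pinned to this kubelet build):

    // backoff.go: sketch of doubling retry backoff for failed volume operations.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond // assumed starting delay
        maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
    )

    // delayAfter returns the wait imposed after n consecutive failures.
    func delayAfter(n int) time.Duration {
        d := initialDelay
        for i := 1; i < n; i++ {
            d *= 2
            if d > maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 9; n++ {
            fmt.Printf("failure %d -> next retry in %v\n", n, delayAfter(n))
        }
        // Under these assumptions the 8th consecutive failure yields 64s,
        // i.e. the "durationBeforeRetry 1m4s" seen in the log above.
    }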
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.605197 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.605230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.605240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.605256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.605266 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.619921 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.619973 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.619999 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.620037 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620060 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620136 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620182 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620188 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620209 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620213 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620148 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:17:37.62012021 +0000 UTC m=+148.173650139 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620233 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620251 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620283 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-29 07:17:37.620258193 +0000 UTC m=+148.173788172 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620345 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-29 07:17:37.620314485 +0000 UTC m=+148.173844424 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.620376 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-29 07:17:37.620361196 +0000 UTC m=+148.173891125 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.693147 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.693170 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.693221 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.693590 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.693727 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
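The `object "namespace"/"name" not registered` failures above come from the kubelet's local object cache: after a restart it will not populate configMap- and secret-backed volumes until the referencing pods have re-registered those objects, so these errors are usually transient rather than a sign the objects are missing server-side. A quick client-go sketch (illustrative only; assumes a reachable kubeconfig at the default home location) to confirm the ConfigMaps the projected token volumes need actually exist in the API:

    // checkcm.go: verify the ConfigMaps referenced by the projected volumes exist.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "openshift-network-diagnostics"
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            // A nil error here means the object exists in the API even though
            // the kubelet-side cache still reports it as "not registered".
            _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s/%s: err=%v\n", ns, name, err)
        }
    }

If these lookups succeed while the mounts keep failing, the problem is the kubelet's registration state, not the cluster objects themselves.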
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.693798 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.693867 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:33 crc kubenswrapper[4660]: E1129 07:16:33.693928 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710193 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710253 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.710664 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.813300 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.813363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.813383 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.813412 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.813436 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.916473 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.916541 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.916564 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.916594 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:33 crc kubenswrapper[4660]: I1129 07:16:33.916648 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:33Z","lastTransitionTime":"2025-11-29T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.018927 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.018995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.019014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.019037 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.019055 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.122306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.122344 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.122409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.122426 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.122465 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.185205 4660 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.304828288s: [/var/lib/containers/storage/overlay/06312fba170ae265625ca48a7743d0604fa01d74a05781f43e392788a637139a/diff /var/log/pods/openshift-machine-config-operator_machine-config-daemon-bjw9w_0f4a7492-b946-4db3-b301-0b860ed7cce1/machine-config-daemon/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.185575 4660 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.578168936s: [/var/lib/containers/storage/overlay/cb92fe01c113c923edfd38c09d84d4ee104091f22401fadb04887a8b7b8e922a/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/kube-rbac-proxy-ovn-metrics/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.185875 4660 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.660147247s: [/var/lib/containers/storage/overlay/9e693d08d58098c16d3645a1ad1b61a97c80da54134e1e255f2e7942a64c98e3/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/kube-rbac-proxy-node/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.187960 4660 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.498704577s: [/var/lib/containers/storage/overlay/e38e9b9948a9c0ba0bd02a0cc4713a5b2887f5db451b1ea81929a43f70b18e79/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/northd/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.226737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.226799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.226812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.226832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.226843 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.330231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.330287 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.330299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.330321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.330335 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.432649 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.432701 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.432712 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.432730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.432741 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.536387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.536444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.536457 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.536482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.536497 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.639818 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.639869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.639882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.639904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.639915 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.694841 4660 scope.go:117] "RemoveContainer" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.743550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.744273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.744363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.744476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.744570 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.847060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.847316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.847556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.847684 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.847765 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.950060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.950093 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.950102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.950115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:34 crc kubenswrapper[4660]: I1129 07:16:34.950125 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:34Z","lastTransitionTime":"2025-11-29T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.052955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.053003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.053015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.053033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.053049 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.155705 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.155742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.155750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.155763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.155772 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.259987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.260065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.260100 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.260119 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.260128 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.362939 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.362982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.362998 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.363020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.363037 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.466214 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.466290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.466308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.466334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.466354 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.542961 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.543127 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.543189 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs podName:58b9294e-0d4f-4671-b4ad-513b428cc45d nodeName:}" failed. No retries permitted until 2025-11-29 07:17:39.543173422 +0000 UTC m=+150.096703321 (durationBeforeRetry 1m4s). 
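Interleaved with the volume errors, the kubelet keeps republishing the node's Ready condition as False (the repeating setters.go "Node became not ready" records) because the runtime reports NetworkReady=false until a CNI config appears under /etc/kubernetes/cni/net.d/. The condition it writes is an ordinary corev1.NodeCondition; a sketch of constructing the same shape (field names from k8s.io/api/core/v1; the message is abbreviated here):

    // condition.go: build the Ready=False condition mirrored in the records above.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cond := corev1.NodeCondition{
            Type:               corev1.NodeReady,
            Status:             corev1.ConditionFalse,
            LastHeartbeatTime:  metav1.Now(),
            LastTransitionTime: metav1.Now(),
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        // Marshals to the same {"type":"Ready","status":"False",...} JSON
        // printed by setters.go in the log.
        b, _ := json.Marshal(cond)
        fmt.Println(string(b))
    }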
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs") pod "network-metrics-daemon-xvjdn" (UID: "58b9294e-0d4f-4671-b4ad-513b428cc45d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.569575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.569668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.569687 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.569711 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.569727 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.672745 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.672820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.672843 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.672872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.672896 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.693511 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.693576 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.693515 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.693734 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.693812 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.693998 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.694206 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:35 crc kubenswrapper[4660]: E1129 07:16:35.694390 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.776382 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.776479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.776497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.776519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.776536 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.879573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.879629 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.879641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.879658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.879671 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.982195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.982231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.982240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.982254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:35 crc kubenswrapper[4660]: I1129 07:16:35.982264 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:35Z","lastTransitionTime":"2025-11-29T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.084441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.084479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.084493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.084507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.084516 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.186994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.187030 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.187038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.187054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.187063 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.289353 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.289389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.289402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.289417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.289427 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.391572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.391672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.391690 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.391713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.391730 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.493864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.493911 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.493928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.493943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.493955 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.589973 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/2.log" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.592363 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.593328 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.595909 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.595968 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.595980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.595997 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.596017 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.606939 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.620396 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.635189 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.648353 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.665682 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b36ea27-63b8-41f9-bc63-0ece621dc0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a524d037d1390427673fa9698643411c3902595e04e84a84603afc5bbf79d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1b0d3c72a569c203641619285fe61ba7274e3fa33c4fc6662fc99c35cf551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d3f116580df6a2a6b9322cb2a081b2b1a4feb63454e859b0e3f5145f8b7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdbb82f4863a742b1c19fe5f3ac11f0712f113
716e0e70dc29abc0aef258417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82e441855edd7e07e285e91535af7db0b9995acf6e286ee4ba991fbde7af4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.676306 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.691911 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd63
5daef5c285c9155b18037b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.698296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.698355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.698365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.698377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.698385 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.702781 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.713620 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.725141 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.738048 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.752525 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.763475 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.775817 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.787998 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800018 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800260 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.800333 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.811638 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.826696 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.837742 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.902919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.902958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.902968 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.902983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:36 crc kubenswrapper[4660]: I1129 07:16:36.902994 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:36Z","lastTransitionTime":"2025-11-29T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.005368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.005433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.005452 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.005474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.005490 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.108081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.108166 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.108200 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.108227 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.108250 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.211264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.211327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.211350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.211379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.211402 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.314769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.314835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.314853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.314877 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.314894 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.418185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.418253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.418273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.418299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.418317 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.520494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.520581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.520605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.520676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.520700 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.598333 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/3.log" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.599132 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/2.log" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.602544 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" exitCode=1 Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.602673 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.602760 4660 scope.go:117] "RemoveContainer" containerID="8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.605076 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:16:37 crc kubenswrapper[4660]: E1129 07:16:37.605689 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.620959 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.623580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.623670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc 
kubenswrapper[4660]: I1129 07:16:37.623685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.623706 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.623721 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.633331 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.644804 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.659678 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to 
/host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.679035 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b36ea27-63b8-41f9-bc63-0ece621dc0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a524d037d1390427673fa9698643411c3902595e04e84a84603afc5bbf79d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1b0d3c72a569c203641619285fe61ba7274e3fa33c4fc6662fc99c35cf551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d3f116580df6a2a6b9322cb2a081b2b1a4feb63454e859b0e3f5145f8b7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdbb82f4863a742b1c19fe5f3ac11f0712f113
716e0e70dc29abc0aef258417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82e441855edd7e07e285e91535af7db0b9995acf6e286ee4ba991fbde7af4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.692843 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.692946 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:37 crc kubenswrapper[4660]: E1129 07:16:37.693080 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.693330 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:37 crc kubenswrapper[4660]: E1129 07:16:37.693417 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.693561 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:37 crc kubenswrapper[4660]: E1129 07:16:37.693646 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.693763 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:37 crc kubenswrapper[4660]: E1129 07:16:37.693831 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.711382 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd63
5daef5c285c9155b18037b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f7ae18dd2873d1174deacbccf667ce41066c5607ff7090cf95d76deeae77f85\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"message\\\":\\\"ll/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.613745 6234 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.613899 6234 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.614233 6234 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:05.614362 6234 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:05.614797 6234 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:05.615236 6234 factory.go:656] Stopping watch factory\\\\nI1129 07:16:05.631563 6234 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1129 07:16:05.631589 6234 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1129 07:16:05.631683 6234 ovnkube.go:599] Stopped ovnkube\\\\nI1129 07:16:05.631708 6234 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1129 07:16:05.631785 6234 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:37Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.580988 6619 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581029 6619 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581049 6619 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581101 6619 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581286 6619 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:36.581441 6619 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581688 6619 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581875 6619 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93
b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.720958 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.726792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.726829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.726837 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.726861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.726871 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.733892 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.754850 4660 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.768869 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.779252 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.787668 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.799828 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.808786 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.818207 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.829284 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.829319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.829331 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.829347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.829361 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.835787 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.847082 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.858205 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.931792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.931840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.931851 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.931867 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:37 crc kubenswrapper[4660]: I1129 07:16:37.931877 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:37Z","lastTransitionTime":"2025-11-29T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.034380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.034424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.034435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.034449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.034458 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.137096 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.137186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.137216 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.137248 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.137269 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.240024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.240065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.240076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.240091 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.240101 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.343206 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.343273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.343292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.343317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.343336 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.445113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.445166 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.445182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.445204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.445225 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.547843 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.547934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.547953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.547975 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.547993 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.608917 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/3.log" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.616342 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:16:38 crc kubenswrapper[4660]: E1129 07:16:38.617444 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.639418 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.650366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.650418 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.650435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.650458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.650476 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.656894 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.671109 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.683917 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.701055 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.717821 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.733278 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.748556 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.753804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.753907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.753933 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.754020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.754097 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.761998 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.772070 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.782760 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.796788 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.811450 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.828940 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.845175 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.856599 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.856651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc 
kubenswrapper[4660]: I1129 07:16:38.856663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.856679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.856691 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.858912 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.872131 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.891095 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd63
5daef5c285c9155b18037b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:37Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.580988 6619 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581029 6619 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581049 6619 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581101 6619 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581286 6619 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:36.581441 6619 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581688 6619 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581875 6619 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.910118 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b36ea27-63b8-41f9-bc63-0ece621dc0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a524d037d1390427673fa9698643411c3902595e04e84a84603afc5bbf79d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1b0d3c72a569c20364161
9285fe61ba7274e3fa33c4fc6662fc99c35cf551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d3f116580df6a2a6b9322cb2a081b2b1a4feb63454e859b0e3f5145f8b7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdbb82f4863a742b1c19fe5f3ac11f0712f113716e0e70dc29abc0aef258417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82e441855edd7e07e285e91535af7db0b9995acf6e286ee4ba991fbde7af4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:38Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.958910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.958979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:38 crc 
kubenswrapper[4660]: I1129 07:16:38.959002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.959032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:38 crc kubenswrapper[4660]: I1129 07:16:38.959053 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:38Z","lastTransitionTime":"2025-11-29T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.060804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.060855 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.060868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.060883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.060895 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.163809 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.163856 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.163868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.163889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.163900 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.266106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.266143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.266154 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.266171 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.266183 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.369050 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.369088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.369102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.369121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.369131 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.472258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.472302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.472314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.472329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.472342 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.574602 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.574770 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.574783 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.574797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.574809 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.678003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.678070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.678088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.678116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.678135 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.693049 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 29 07:16:39 crc kubenswrapper[4660]: E1129 07:16:39.693199 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.693071 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.693254 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 29 07:16:39 crc kubenswrapper[4660]: E1129 07:16:39.693289 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.693045 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn"
Nov 29 07:16:39 crc kubenswrapper[4660]: E1129 07:16:39.693403 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 29 07:16:39 crc kubenswrapper[4660]: E1129 07:16:39.693490 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d"
Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.705174 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27156694-e54f-4a8c-8c99-9a044aef4cb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ceeab7d119f7ca520c1f8ec79f93e873ada960da4c45e41d8c8d4d2adca9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28cd78ec2be8010df0294689d4d2187c47723910b6a608ebf6ac9bc40f012c2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.719898 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36d7eced197c5bf6cc4b9c5c67b5281f0ef4d2016b32845ea33fccba18017a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.735066 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b809fb66cb174ce3d47d42db53f16cb739b014b8d2c9f03ab33857079628ff8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8091333b3e0fa910229c34c60502621ec8c28985a3ee72689614fb60ebbe4ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.747137 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b9294e-0d4f-4671-b4ad-513b428cc45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnm7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xvjdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.758249 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24bac20d-6112-403d-b98d-dfe5b13913d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a891903fb7f669be6edd03bc07c1ef831da1b60673f40019e0f44ed7e870d136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea9f4c7038664c38234efbb2d1c9e527e916af6ac66443351bfa0716f670a5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cts6d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-msq74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 
07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.771205 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73398adb-2c45-4f24-9e89-3cc192b80d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.779984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.780022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.780031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.780046 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.780055 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.783373 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7c1702d-7c41-46f6-b46c-e535f9d25fa6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5de2c0a4516d8a2c45e513e9d883bba4c8c364358ef80fc3c215d7c5890d8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef262794e74125d4e6f5488f9b5e2fd48436088bb6ba56b5d6242a09d34c3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee340e55a0523d0fca7ca9e92b5edae5b4e3e643fbb8d38f5b9a8e3a09c9f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c83b1e1cfa9b1412aca7e25075c87ea4cc467f5c4c3553d665a03d6deeba7146\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.801525 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.813576 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d705f50be97749b32802b630dfb7efcf55c338056e6fd5e2b300d1ca3c48ddd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.824360 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqtc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7fd3a7-a7ba-4231-92bc-accc35c6d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77993c34f1db5059ebfdacb5b8a71d809c47f8eccad9a83dd8e9fea0190184d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhsz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqtc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.836078 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f4a7492-b946-4db3-b301-0b860ed7cce1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8689ee42b58c522ff3d0432f80975ed509a368aae79cb519e425215b8bfe257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5sjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bjw9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.846856 4660 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-689qx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c27831a3-624c-4e2a-80d5-f40e47f79e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77697f221a8b4542ba0fb851371f331d0f9a026d15fcf6392bf047851db379a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-689qx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.860504 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdc9e6a5-1324-4a4c-b5b6-809ff529c301\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348eab399bc6844f71d30e5df331b483915db074bbbb1159dc6170a98890564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b875911bc55b5007c326045579082deff3b97fbf4f0098f4540c838d43bd8499\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0bedffbb5831a2da2a9d0f8ed6e54c693987b4bf0236da23426d7a86242b74b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.874193 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.882402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.882441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.882451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.882466 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.882476 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.889553 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99mtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e71cb583-cccf-4345-8695-0d3a6c237a35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:16:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:18Z\\\",\\\"message\\\":\\\"2025-11-29T07:15:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4\\\\n2025-11-29T07:15:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bb08be03-b2cb-4461-a630-43a79cd160c4 to /host/opt/cni/bin/\\\\n2025-11-29T07:15:33Z [verbose] multus-daemon started\\\\n2025-11-29T07:15:33Z [verbose] Readiness Indicator file check\\\\n2025-11-29T07:16:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:16:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4v4h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99mtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.906231 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ca2e94-4023-4f1d-a2bd-0b990aa9c128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0561aee1f6ad9de2a8f41484a7519906016fdd8a61ce17dbec14083bcf9ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594d2c40212024f4c61d9a378f24f6cd2c8c4ddbae236dc99003b82788050f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e3978dffaefe35bb62765052adc10ce41d8990e4d7759a67b89a15bde85d457\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71fe9367607dca93aeeccad4358411022a668756beec6aa966c50609c6462201\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0fa36fb242b72150f91f08c4fe606c9d7ff7861382c3c9d5fba970faa486ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff15b894a20cf0d4fac4e16fc7ec93549d94eb64d79e88f64d308e06bf6e4dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27bb9ce6d630a6b06de264b40688e92ead5cab374758b6ba8a11a131d69fa79e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g8fkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.925302 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b36ea27-63b8-41f9-bc63-0ece621dc0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a524d037d1390427673fa9698643411c3902595e04e84a84603afc5bbf79d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1b0d3c72a569c203641619285fe61ba7274e3fa33c4fc6662fc99c35cf551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d3f116580df6a2a6b9322cb2a081b2b1a4feb63454e859b0e3f5145f8b7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdbb82f4863a742b1c19fe5f3ac11f0712f113
716e0e70dc29abc0aef258417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e82e441855edd7e07e285e91535af7db0b9995acf6e286ee4ba991fbde7af4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://501b89a5f82b8583820415704fd389b420076efada02f6c3f664eafa1ea959ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b726389547b6506dd1d74d0546504d1361c7b093b28da1488d6ea92f118cb0c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81bdb2a1cf83a2b1d5bca42486d17545534bd2313634cf3ebaa224b66a89b4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.938506 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.964787 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01aa307a-c2ec-4ded-8677-da549fbfba76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-29T07:15:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd63
5daef5c285c9155b18037b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-29T07:16:37Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.580988 6619 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581029 6619 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581049 6619 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581101 6619 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1129 07:16:36.581286 6619 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1129 07:16:36.581441 6619 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581688 6619 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1129 07:16:36.581875 6619 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-29T07:16:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-29T07:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-29T07:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-29T07:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szm8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-29T07:15:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgvps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-29T07:16:39Z is after 2025-08-24T17:21:41Z" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.984501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.984550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.984559 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.984575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:39 crc kubenswrapper[4660]: I1129 07:16:39.984583 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:39Z","lastTransitionTime":"2025-11-29T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.086781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.086821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.086830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.086845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.086856 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.189168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.189211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.189226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.189242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.189256 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.291450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.291476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.291492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.291510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.291522 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.393853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.393919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.393931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.393950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.393964 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.496881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.496953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.496962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.496979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.496991 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.599884 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.599993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.600053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.600076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.600093 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.702503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.702539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.702550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.702571 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.702599 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.805061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.805112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.805125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.805172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.805185 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.907693 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.907732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.907749 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.907769 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:40 crc kubenswrapper[4660]: I1129 07:16:40.907782 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:40Z","lastTransitionTime":"2025-11-29T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.010171 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.010213 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.010226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.010297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.010313 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.114013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.114067 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.114083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.114108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.114127 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.217050 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.217085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.217094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.217133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.217178 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.319580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.319643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.319659 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.319681 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.319695 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.422741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.422811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.422835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.422865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.422889 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.525129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.525249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.525274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.525302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.525323 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.562988 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.563048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.563065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.563090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.563109 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T07:16:41Z","lastTransitionTime":"2025-11-29T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.598303 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm"] Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.599023 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.601514 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.601686 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.601751 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.604047 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.622843 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.62282717 podStartE2EDuration="1m12.62282717s" podCreationTimestamp="2025-11-29 07:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.622547492 +0000 UTC m=+92.176077441" watchObservedRunningTime="2025-11-29 07:16:41.62282717 +0000 UTC m=+92.176357069" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.652445 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.652421225 podStartE2EDuration="35.652421225s" podCreationTimestamp="2025-11-29 07:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.639454787 +0000 UTC m=+92.192984726" watchObservedRunningTime="2025-11-29 07:16:41.652421225 +0000 UTC m=+92.205951124" Nov 29 07:16:41 crc 
kubenswrapper[4660]: I1129 07:16:41.682531 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sqtc9" podStartSLOduration=72.682512023 podStartE2EDuration="1m12.682512023s" podCreationTimestamp="2025-11-29 07:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.68195515 +0000 UTC m=+92.235485049" watchObservedRunningTime="2025-11-29 07:16:41.682512023 +0000 UTC m=+92.236041922" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.692923 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.692970 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.692987 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:41 crc kubenswrapper[4660]: E1129 07:16:41.693082 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.693106 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:41 crc kubenswrapper[4660]: E1129 07:16:41.693187 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:41 crc kubenswrapper[4660]: E1129 07:16:41.693258 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:41 crc kubenswrapper[4660]: E1129 07:16:41.693298 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.694453 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podStartSLOduration=70.694435086 podStartE2EDuration="1m10.694435086s" podCreationTimestamp="2025-11-29 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.693735018 +0000 UTC m=+92.247264917" watchObservedRunningTime="2025-11-29 07:16:41.694435086 +0000 UTC m=+92.247964985" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.707582 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.707630 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bbd6752-a525-4feb-a959-4871a4495cc8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.707659 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bbd6752-a525-4feb-a959-4871a4495cc8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.707694 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bbd6752-a525-4feb-a959-4871a4495cc8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.707731 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.715828 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-689qx" podStartSLOduration=69.71580562 podStartE2EDuration="1m9.71580562s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.704365379 +0000 UTC m=+92.257895278" watchObservedRunningTime="2025-11-29 07:16:41.71580562 +0000 UTC m=+92.269335519" Nov 29 07:16:41 crc 
kubenswrapper[4660]: I1129 07:16:41.729461 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-msq74" podStartSLOduration=69.729445914 podStartE2EDuration="1m9.729445914s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.716033075 +0000 UTC m=+92.269562974" watchObservedRunningTime="2025-11-29 07:16:41.729445914 +0000 UTC m=+92.282975813" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.729697 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.72969246 podStartE2EDuration="1m12.72969246s" podCreationTimestamp="2025-11-29 07:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.729547957 +0000 UTC m=+92.283077856" watchObservedRunningTime="2025-11-29 07:16:41.72969246 +0000 UTC m=+92.283222359" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.772832 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-99mtq" podStartSLOduration=70.772814668 podStartE2EDuration="1m10.772814668s" podCreationTimestamp="2025-11-29 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.75619891 +0000 UTC m=+92.309728819" watchObservedRunningTime="2025-11-29 07:16:41.772814668 +0000 UTC m=+92.326344557" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.773270 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-g8fkc" podStartSLOduration=70.773265849 podStartE2EDuration="1m10.773265849s" podCreationTimestamp="2025-11-29 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.773169686 +0000 UTC m=+92.326699585" watchObservedRunningTime="2025-11-29 07:16:41.773265849 +0000 UTC m=+92.326795748" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.808534 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bbd6752-a525-4feb-a959-4871a4495cc8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.808649 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bbd6752-a525-4feb-a959-4871a4495cc8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.808708 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.808745 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.808780 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bbd6752-a525-4feb-a959-4871a4495cc8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.810143 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.810273 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6bbd6752-a525-4feb-a959-4871a4495cc8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.810732 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bbd6752-a525-4feb-a959-4871a4495cc8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.818411 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bbd6752-a525-4feb-a959-4871a4495cc8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.829366 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bbd6752-a525-4feb-a959-4871a4495cc8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tvxnm\" (UID: \"6bbd6752-a525-4feb-a959-4871a4495cc8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.834947 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.834926921 podStartE2EDuration="8.834926921s" podCreationTimestamp="2025-11-29 07:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.809950588 +0000 UTC m=+92.363480507" watchObservedRunningTime="2025-11-29 
07:16:41.834926921 +0000 UTC m=+92.388456830" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.910047 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.910033713 podStartE2EDuration="24.910033713s" podCreationTimestamp="2025-11-29 07:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:41.886175398 +0000 UTC m=+92.439705297" watchObservedRunningTime="2025-11-29 07:16:41.910033713 +0000 UTC m=+92.463563612" Nov 29 07:16:41 crc kubenswrapper[4660]: I1129 07:16:41.916168 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" Nov 29 07:16:42 crc kubenswrapper[4660]: I1129 07:16:42.626933 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" event={"ID":"6bbd6752-a525-4feb-a959-4871a4495cc8","Type":"ContainerStarted","Data":"802caf26149cd77223b494863238869d0c9cf0c2a83a82073a06f8aeca39a872"} Nov 29 07:16:42 crc kubenswrapper[4660]: I1129 07:16:42.626974 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" event={"ID":"6bbd6752-a525-4feb-a959-4871a4495cc8","Type":"ContainerStarted","Data":"2a41a9b36abee35810106a1d1df7d1903e0f41b43f541e93135fa04cb63008d6"} Nov 29 07:16:42 crc kubenswrapper[4660]: I1129 07:16:42.645013 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tvxnm" podStartSLOduration=70.644995609 podStartE2EDuration="1m10.644995609s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:16:42.644899056 +0000 UTC m=+93.198428955" watchObservedRunningTime="2025-11-29 07:16:42.644995609 +0000 UTC m=+93.198525518" Nov 29 07:16:43 crc kubenswrapper[4660]: I1129 07:16:43.692928 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:43 crc kubenswrapper[4660]: E1129 07:16:43.693044 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:43 crc kubenswrapper[4660]: I1129 07:16:43.693216 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:43 crc kubenswrapper[4660]: E1129 07:16:43.693261 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:43 crc kubenswrapper[4660]: I1129 07:16:43.693355 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:43 crc kubenswrapper[4660]: E1129 07:16:43.693399 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:43 crc kubenswrapper[4660]: I1129 07:16:43.693783 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:43 crc kubenswrapper[4660]: E1129 07:16:43.693998 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:45 crc kubenswrapper[4660]: I1129 07:16:45.693040 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:45 crc kubenswrapper[4660]: I1129 07:16:45.693150 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:45 crc kubenswrapper[4660]: I1129 07:16:45.693186 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:45 crc kubenswrapper[4660]: E1129 07:16:45.693672 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:45 crc kubenswrapper[4660]: E1129 07:16:45.693479 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:45 crc kubenswrapper[4660]: I1129 07:16:45.693194 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:45 crc kubenswrapper[4660]: E1129 07:16:45.693760 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:45 crc kubenswrapper[4660]: E1129 07:16:45.694076 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:47 crc kubenswrapper[4660]: I1129 07:16:47.692556 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:47 crc kubenswrapper[4660]: I1129 07:16:47.693217 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:47 crc kubenswrapper[4660]: I1129 07:16:47.693254 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:47 crc kubenswrapper[4660]: I1129 07:16:47.693338 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:47 crc kubenswrapper[4660]: E1129 07:16:47.693427 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:47 crc kubenswrapper[4660]: E1129 07:16:47.693654 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:47 crc kubenswrapper[4660]: E1129 07:16:47.693945 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:47 crc kubenswrapper[4660]: E1129 07:16:47.694162 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:49 crc kubenswrapper[4660]: I1129 07:16:49.693040 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:49 crc kubenswrapper[4660]: I1129 07:16:49.693037 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:49 crc kubenswrapper[4660]: I1129 07:16:49.693061 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:49 crc kubenswrapper[4660]: I1129 07:16:49.693081 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:49 crc kubenswrapper[4660]: E1129 07:16:49.694261 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:49 crc kubenswrapper[4660]: E1129 07:16:49.694342 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:49 crc kubenswrapper[4660]: E1129 07:16:49.694418 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:49 crc kubenswrapper[4660]: E1129 07:16:49.694496 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:49 crc kubenswrapper[4660]: I1129 07:16:49.695295 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:16:49 crc kubenswrapper[4660]: E1129 07:16:49.695519 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:16:51 crc kubenswrapper[4660]: I1129 07:16:51.693103 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:51 crc kubenswrapper[4660]: I1129 07:16:51.693122 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:51 crc kubenswrapper[4660]: I1129 07:16:51.693164 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:51 crc kubenswrapper[4660]: I1129 07:16:51.693289 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:51 crc kubenswrapper[4660]: E1129 07:16:51.693936 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:51 crc kubenswrapper[4660]: E1129 07:16:51.693799 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:51 crc kubenswrapper[4660]: E1129 07:16:51.694052 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:51 crc kubenswrapper[4660]: E1129 07:16:51.694182 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:53 crc kubenswrapper[4660]: I1129 07:16:53.693396 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:53 crc kubenswrapper[4660]: I1129 07:16:53.693410 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:53 crc kubenswrapper[4660]: I1129 07:16:53.693419 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:53 crc kubenswrapper[4660]: I1129 07:16:53.693974 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:53 crc kubenswrapper[4660]: E1129 07:16:53.694076 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:53 crc kubenswrapper[4660]: E1129 07:16:53.694143 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:53 crc kubenswrapper[4660]: E1129 07:16:53.694336 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:53 crc kubenswrapper[4660]: E1129 07:16:53.694458 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:55 crc kubenswrapper[4660]: I1129 07:16:55.692913 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:55 crc kubenswrapper[4660]: E1129 07:16:55.693687 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:55 crc kubenswrapper[4660]: I1129 07:16:55.693061 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:55 crc kubenswrapper[4660]: I1129 07:16:55.693839 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:55 crc kubenswrapper[4660]: I1129 07:16:55.693009 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:55 crc kubenswrapper[4660]: E1129 07:16:55.693975 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:55 crc kubenswrapper[4660]: E1129 07:16:55.694146 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:55 crc kubenswrapper[4660]: E1129 07:16:55.694328 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:57 crc kubenswrapper[4660]: I1129 07:16:57.692555 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:57 crc kubenswrapper[4660]: I1129 07:16:57.692606 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:57 crc kubenswrapper[4660]: E1129 07:16:57.692799 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:57 crc kubenswrapper[4660]: I1129 07:16:57.692819 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:57 crc kubenswrapper[4660]: I1129 07:16:57.692869 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:57 crc kubenswrapper[4660]: E1129 07:16:57.692988 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:57 crc kubenswrapper[4660]: E1129 07:16:57.693071 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:16:57 crc kubenswrapper[4660]: E1129 07:16:57.693466 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:59 crc kubenswrapper[4660]: I1129 07:16:59.692697 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:16:59 crc kubenswrapper[4660]: I1129 07:16:59.692754 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:16:59 crc kubenswrapper[4660]: I1129 07:16:59.692760 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:16:59 crc kubenswrapper[4660]: I1129 07:16:59.694098 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:16:59 crc kubenswrapper[4660]: E1129 07:16:59.694093 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:16:59 crc kubenswrapper[4660]: E1129 07:16:59.694258 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:16:59 crc kubenswrapper[4660]: E1129 07:16:59.694893 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:16:59 crc kubenswrapper[4660]: E1129 07:16:59.695059 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:01 crc kubenswrapper[4660]: I1129 07:17:01.693644 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:01 crc kubenswrapper[4660]: I1129 07:17:01.693766 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:01 crc kubenswrapper[4660]: I1129 07:17:01.693813 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:01 crc kubenswrapper[4660]: I1129 07:17:01.693972 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:01 crc kubenswrapper[4660]: E1129 07:17:01.693958 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:01 crc kubenswrapper[4660]: E1129 07:17:01.694071 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:01 crc kubenswrapper[4660]: E1129 07:17:01.694240 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:01 crc kubenswrapper[4660]: E1129 07:17:01.694285 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:03 crc kubenswrapper[4660]: I1129 07:17:03.692883 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:03 crc kubenswrapper[4660]: I1129 07:17:03.693687 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:03 crc kubenswrapper[4660]: E1129 07:17:03.693807 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:03 crc kubenswrapper[4660]: I1129 07:17:03.693835 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:03 crc kubenswrapper[4660]: I1129 07:17:03.693856 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:03 crc kubenswrapper[4660]: E1129 07:17:03.694167 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:03 crc kubenswrapper[4660]: E1129 07:17:03.694369 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:03 crc kubenswrapper[4660]: E1129 07:17:03.694457 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.693272 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:17:04 crc kubenswrapper[4660]: E1129 07:17:04.693421 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.696475 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/1.log" Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.696909 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/0.log" Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.696954 4660 generic.go:334] "Generic (PLEG): container finished" podID="e71cb583-cccf-4345-8695-0d3a6c237a35" containerID="f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3" exitCode=1 Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.696981 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerDied","Data":"f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3"} Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.697009 4660 scope.go:117] "RemoveContainer" containerID="a09e876e6c513ac96715355fc12b73f3db86587862a6fc4fce963d2ce79618d3" Nov 29 07:17:04 crc kubenswrapper[4660]: I1129 07:17:04.697222 4660 scope.go:117] "RemoveContainer" containerID="f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3" Nov 29 07:17:04 crc kubenswrapper[4660]: E1129 07:17:04.697341 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-99mtq_openshift-multus(e71cb583-cccf-4345-8695-0d3a6c237a35)\"" pod="openshift-multus/multus-99mtq" podUID="e71cb583-cccf-4345-8695-0d3a6c237a35" Nov 29 07:17:05 crc kubenswrapper[4660]: I1129 07:17:05.693558 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:05 crc kubenswrapper[4660]: I1129 07:17:05.693602 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:05 crc kubenswrapper[4660]: I1129 07:17:05.693579 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:05 crc kubenswrapper[4660]: I1129 07:17:05.693760 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:05 crc kubenswrapper[4660]: E1129 07:17:05.693935 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:05 crc kubenswrapper[4660]: E1129 07:17:05.694317 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:05 crc kubenswrapper[4660]: E1129 07:17:05.694565 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:05 crc kubenswrapper[4660]: E1129 07:17:05.694980 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:05 crc kubenswrapper[4660]: I1129 07:17:05.701624 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/1.log" Nov 29 07:17:07 crc kubenswrapper[4660]: I1129 07:17:07.693223 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:07 crc kubenswrapper[4660]: I1129 07:17:07.693321 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:07 crc kubenswrapper[4660]: E1129 07:17:07.693675 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:07 crc kubenswrapper[4660]: I1129 07:17:07.693356 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:07 crc kubenswrapper[4660]: I1129 07:17:07.693332 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:07 crc kubenswrapper[4660]: E1129 07:17:07.693805 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:07 crc kubenswrapper[4660]: E1129 07:17:07.693939 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:07 crc kubenswrapper[4660]: E1129 07:17:07.693999 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.656210 4660 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 29 07:17:09 crc kubenswrapper[4660]: I1129 07:17:09.693519 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:09 crc kubenswrapper[4660]: I1129 07:17:09.693519 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:09 crc kubenswrapper[4660]: I1129 07:17:09.693918 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.694392 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:09 crc kubenswrapper[4660]: I1129 07:17:09.694409 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.694587 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.694663 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.694801 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:09 crc kubenswrapper[4660]: E1129 07:17:09.846205 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:17:11 crc kubenswrapper[4660]: I1129 07:17:11.693570 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:11 crc kubenswrapper[4660]: I1129 07:17:11.693654 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:11 crc kubenswrapper[4660]: I1129 07:17:11.693680 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:11 crc kubenswrapper[4660]: E1129 07:17:11.693798 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:11 crc kubenswrapper[4660]: E1129 07:17:11.693899 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:11 crc kubenswrapper[4660]: E1129 07:17:11.693983 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:11 crc kubenswrapper[4660]: I1129 07:17:11.695434 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:11 crc kubenswrapper[4660]: E1129 07:17:11.695958 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:13 crc kubenswrapper[4660]: I1129 07:17:13.693476 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:13 crc kubenswrapper[4660]: I1129 07:17:13.693645 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:13 crc kubenswrapper[4660]: I1129 07:17:13.693544 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:13 crc kubenswrapper[4660]: I1129 07:17:13.693476 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:13 crc kubenswrapper[4660]: E1129 07:17:13.693767 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:13 crc kubenswrapper[4660]: E1129 07:17:13.693934 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:13 crc kubenswrapper[4660]: E1129 07:17:13.694076 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:13 crc kubenswrapper[4660]: E1129 07:17:13.694190 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:14 crc kubenswrapper[4660]: E1129 07:17:14.847442 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 29 07:17:15 crc kubenswrapper[4660]: I1129 07:17:15.692720 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:15 crc kubenswrapper[4660]: I1129 07:17:15.692803 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:15 crc kubenswrapper[4660]: E1129 07:17:15.692854 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:15 crc kubenswrapper[4660]: I1129 07:17:15.692880 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:15 crc kubenswrapper[4660]: I1129 07:17:15.692955 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:15 crc kubenswrapper[4660]: E1129 07:17:15.693069 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:15 crc kubenswrapper[4660]: E1129 07:17:15.693150 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:15 crc kubenswrapper[4660]: E1129 07:17:15.693581 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:15 crc kubenswrapper[4660]: I1129 07:17:15.693947 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:17:15 crc kubenswrapper[4660]: E1129 07:17:15.694131 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qgvps_openshift-ovn-kubernetes(01aa307a-c2ec-4ded-8677-da549fbfba76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" Nov 29 07:17:16 crc kubenswrapper[4660]: I1129 07:17:16.693827 4660 scope.go:117] "RemoveContainer" containerID="f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.693710 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.693810 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:17 crc kubenswrapper[4660]: E1129 07:17:17.693856 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.693740 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.693904 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:17 crc kubenswrapper[4660]: E1129 07:17:17.694061 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:17 crc kubenswrapper[4660]: E1129 07:17:17.694132 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:17 crc kubenswrapper[4660]: E1129 07:17:17.694215 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.743699 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/1.log" Nov 29 07:17:17 crc kubenswrapper[4660]: I1129 07:17:17.743758 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerStarted","Data":"ef03925e6b8c552fb905d516efb63d1ac89f995971d89cd6413d64325fc6ff3f"} Nov 29 07:17:19 crc kubenswrapper[4660]: I1129 07:17:19.692822 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:19 crc kubenswrapper[4660]: I1129 07:17:19.692850 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:19 crc kubenswrapper[4660]: I1129 07:17:19.692822 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:19 crc kubenswrapper[4660]: I1129 07:17:19.694936 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:19 crc kubenswrapper[4660]: E1129 07:17:19.694923 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:19 crc kubenswrapper[4660]: E1129 07:17:19.695015 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:19 crc kubenswrapper[4660]: E1129 07:17:19.695080 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:19 crc kubenswrapper[4660]: E1129 07:17:19.695130 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:19 crc kubenswrapper[4660]: E1129 07:17:19.848186 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:17:21 crc kubenswrapper[4660]: I1129 07:17:21.693051 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:21 crc kubenswrapper[4660]: I1129 07:17:21.693074 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:21 crc kubenswrapper[4660]: E1129 07:17:21.693450 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:21 crc kubenswrapper[4660]: I1129 07:17:21.693091 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:21 crc kubenswrapper[4660]: E1129 07:17:21.693496 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:21 crc kubenswrapper[4660]: I1129 07:17:21.693092 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:21 crc kubenswrapper[4660]: E1129 07:17:21.693537 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:21 crc kubenswrapper[4660]: E1129 07:17:21.693588 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:23 crc kubenswrapper[4660]: I1129 07:17:23.693634 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:23 crc kubenswrapper[4660]: I1129 07:17:23.693605 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:23 crc kubenswrapper[4660]: I1129 07:17:23.693745 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:23 crc kubenswrapper[4660]: I1129 07:17:23.693646 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:23 crc kubenswrapper[4660]: E1129 07:17:23.693795 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:23 crc kubenswrapper[4660]: E1129 07:17:23.693915 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:23 crc kubenswrapper[4660]: E1129 07:17:23.693953 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:23 crc kubenswrapper[4660]: E1129 07:17:23.694023 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:24 crc kubenswrapper[4660]: E1129 07:17:24.849861 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:17:25 crc kubenswrapper[4660]: I1129 07:17:25.693033 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:25 crc kubenswrapper[4660]: I1129 07:17:25.693126 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:25 crc kubenswrapper[4660]: I1129 07:17:25.693199 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:25 crc kubenswrapper[4660]: E1129 07:17:25.693213 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:25 crc kubenswrapper[4660]: I1129 07:17:25.693272 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:25 crc kubenswrapper[4660]: E1129 07:17:25.693372 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:25 crc kubenswrapper[4660]: E1129 07:17:25.693563 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:25 crc kubenswrapper[4660]: E1129 07:17:25.693632 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:27 crc kubenswrapper[4660]: I1129 07:17:27.693319 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:27 crc kubenswrapper[4660]: I1129 07:17:27.693362 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:27 crc kubenswrapper[4660]: I1129 07:17:27.693326 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:27 crc kubenswrapper[4660]: I1129 07:17:27.693414 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:27 crc kubenswrapper[4660]: E1129 07:17:27.693543 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:27 crc kubenswrapper[4660]: E1129 07:17:27.693670 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:27 crc kubenswrapper[4660]: E1129 07:17:27.693771 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:27 crc kubenswrapper[4660]: E1129 07:17:27.693865 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:28 crc kubenswrapper[4660]: I1129 07:17:28.694273 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:17:29 crc kubenswrapper[4660]: I1129 07:17:29.693291 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:29 crc kubenswrapper[4660]: I1129 07:17:29.693351 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:29 crc kubenswrapper[4660]: I1129 07:17:29.693391 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:29 crc kubenswrapper[4660]: E1129 07:17:29.694495 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:29 crc kubenswrapper[4660]: I1129 07:17:29.694512 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:29 crc kubenswrapper[4660]: E1129 07:17:29.694671 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:29 crc kubenswrapper[4660]: E1129 07:17:29.694754 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:29 crc kubenswrapper[4660]: E1129 07:17:29.694823 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:29 crc kubenswrapper[4660]: E1129 07:17:29.850742 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 29 07:17:30 crc kubenswrapper[4660]: I1129 07:17:30.786982 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/3.log" Nov 29 07:17:30 crc kubenswrapper[4660]: I1129 07:17:30.789934 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerStarted","Data":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:17:30 crc kubenswrapper[4660]: I1129 07:17:30.790439 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:17:30 crc kubenswrapper[4660]: I1129 07:17:30.819669 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podStartSLOduration=119.819651797 podStartE2EDuration="1m59.819651797s" podCreationTimestamp="2025-11-29 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:30.819244695 +0000 UTC m=+141.372774604" watchObservedRunningTime="2025-11-29 07:17:30.819651797 +0000 UTC m=+141.373181696" Nov 29 07:17:31 crc kubenswrapper[4660]: I1129 07:17:31.268432 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xvjdn"] Nov 29 07:17:31 crc kubenswrapper[4660]: I1129 07:17:31.268561 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:31 crc kubenswrapper[4660]: E1129 07:17:31.268677 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:31 crc kubenswrapper[4660]: I1129 07:17:31.692835 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:31 crc kubenswrapper[4660]: I1129 07:17:31.692837 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:31 crc kubenswrapper[4660]: E1129 07:17:31.692978 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:31 crc kubenswrapper[4660]: I1129 07:17:31.692837 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:31 crc kubenswrapper[4660]: E1129 07:17:31.693098 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:31 crc kubenswrapper[4660]: E1129 07:17:31.693172 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:32 crc kubenswrapper[4660]: I1129 07:17:32.693205 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:32 crc kubenswrapper[4660]: E1129 07:17:32.693376 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:33 crc kubenswrapper[4660]: I1129 07:17:33.694591 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:33 crc kubenswrapper[4660]: E1129 07:17:33.694752 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 29 07:17:33 crc kubenswrapper[4660]: I1129 07:17:33.694824 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:33 crc kubenswrapper[4660]: I1129 07:17:33.694595 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:33 crc kubenswrapper[4660]: E1129 07:17:33.694950 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 29 07:17:33 crc kubenswrapper[4660]: E1129 07:17:33.694982 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 29 07:17:34 crc kubenswrapper[4660]: I1129 07:17:34.693224 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:34 crc kubenswrapper[4660]: E1129 07:17:34.693507 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xvjdn" podUID="58b9294e-0d4f-4671-b4ad-513b428cc45d" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.500439 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.500497 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.692650 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.693035 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.693371 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.695204 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.695222 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.696759 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:17:35 crc kubenswrapper[4660]: I1129 07:17:35.698135 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 29 07:17:36 crc kubenswrapper[4660]: I1129 07:17:36.692997 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:36 crc kubenswrapper[4660]: I1129 07:17:36.695249 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:17:36 crc kubenswrapper[4660]: I1129 07:17:36.695387 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.578069 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:37 crc kubenswrapper[4660]: E1129 07:17:37.578285 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:19:39.578243687 +0000 UTC m=+270.131773626 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.679166 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.679233 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.679284 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.679346 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.680585 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.688822 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.688983 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.690285 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.810939 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.818834 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:37 crc kubenswrapper[4660]: I1129 07:17:37.823764 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 29 07:17:38 crc kubenswrapper[4660]: W1129 07:17:38.051960 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e75b401e4fb41b994781f6fbd020b98fc0f6e7b077310349d268336f885573ae WatchSource:0}: Error finding container e75b401e4fb41b994781f6fbd020b98fc0f6e7b077310349d268336f885573ae: Status 404 returned error can't find the container with id e75b401e4fb41b994781f6fbd020b98fc0f6e7b077310349d268336f885573ae Nov 29 07:17:38 crc kubenswrapper[4660]: W1129 07:17:38.108759 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f5203cb237cc15c84158fa721b84341deb3820ff575a8179b71d4119b607f0bd WatchSource:0}: Error finding container f5203cb237cc15c84158fa721b84341deb3820ff575a8179b71d4119b607f0bd: Status 404 returned error can't find the container with id f5203cb237cc15c84158fa721b84341deb3820ff575a8179b71d4119b607f0bd Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.172748 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.821574 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ef5bf907988698e2aad33c407f9e15570f28793865ed790a615c9f0babf2f30f"} Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.821642 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5184bc6091fd5a44158c33105e58e55140f1511226a77a13b997fc061ea83165"} Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.823230 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ee2575a9021991ad1fa2eb95bda2ea24aa1df1d6eee3d02a260e20d10191650f"} Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.823323 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e75b401e4fb41b994781f6fbd020b98fc0f6e7b077310349d268336f885573ae"} Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.823523 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.825242 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"914a6a98aa3db3fe6b6adc87ab2e950c4621d6190c1a6d10c11777a252bb73ca"} Nov 29 07:17:38 crc kubenswrapper[4660]: I1129 07:17:38.825287 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f5203cb237cc15c84158fa721b84341deb3820ff575a8179b71d4119b607f0bd"} Nov 29 07:17:39 crc kubenswrapper[4660]: I1129 07:17:39.601197 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:39 crc kubenswrapper[4660]: I1129 07:17:39.606469 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58b9294e-0d4f-4671-b4ad-513b428cc45d-metrics-certs\") pod \"network-metrics-daemon-xvjdn\" (UID: \"58b9294e-0d4f-4671-b4ad-513b428cc45d\") " pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:39 crc kubenswrapper[4660]: I1129 07:17:39.708809 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xvjdn" Nov 29 07:17:39 crc kubenswrapper[4660]: I1129 07:17:39.911991 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xvjdn"] Nov 29 07:17:40 crc kubenswrapper[4660]: I1129 07:17:40.837773 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" event={"ID":"58b9294e-0d4f-4671-b4ad-513b428cc45d","Type":"ContainerStarted","Data":"2b946c4b9d13226df25dcd73310e44e68d1c25b8f361abcf7875db34fa016221"} Nov 29 07:17:40 crc kubenswrapper[4660]: I1129 07:17:40.838235 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" event={"ID":"58b9294e-0d4f-4671-b4ad-513b428cc45d","Type":"ContainerStarted","Data":"da6e18faac7201ba45d20806189986799e41d67abf96bf0a975e0ce4004533de"} Nov 29 07:17:40 crc kubenswrapper[4660]: I1129 07:17:40.838269 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xvjdn" event={"ID":"58b9294e-0d4f-4671-b4ad-513b428cc45d","Type":"ContainerStarted","Data":"856a9474cd2d8384f5b7eed31965b6e1e5f5c679a6e01377e306d8fa4e5e247d"} Nov 29 07:17:40 crc kubenswrapper[4660]: I1129 07:17:40.857132 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xvjdn" podStartSLOduration=129.857110207 podStartE2EDuration="2m9.857110207s" podCreationTimestamp="2025-11-29 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:40.856157171 +0000 UTC m=+151.409687170" watchObservedRunningTime="2025-11-29 07:17:40.857110207 +0000 UTC m=+151.410640116" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.000641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.033231 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.033630 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.033958 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.034299 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.037095 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.037228 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.037458 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.037882 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tmccw"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.038416 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.039099 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7j5ts"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.039577 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.040052 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.040405 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.040927 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.041267 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.041278 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.041711 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xfw5p"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.042009 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.043464 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.043697 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.044025 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kpp2s"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.044199 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.044281 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.044510 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.044809 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.045206 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.045650 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.046594 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fk72t"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.046912 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.047138 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dwgp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.050673 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.051055 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.053403 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bhg29"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.054063 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.054071 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.078570 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.120070 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.120260 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.133950 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.134145 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.134253 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136878 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136909 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxpj\" (UniqueName: \"kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136929 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136945 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136960 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f0474a-87d7-45e8-8bd4-036610a71240-serving-cert\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136976 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqxt\" 
(UniqueName: \"kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.136991 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbvz\" (UniqueName: \"kubernetes.io/projected/33653b7e-b48e-447a-84ed-a21dc8b827ac-kube-api-access-svbvz\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137007 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmk67\" (UniqueName: \"kubernetes.io/projected/133a42bf-5cdf-4614-8a42-4ce3e350481e-kube-api-access-cmk67\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137023 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137039 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137052 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137066 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-encryption-config\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137079 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137094 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-image-import-ca\") pod 
\"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137122 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkx4n\" (UniqueName: \"kubernetes.io/projected/a2998d6f-01b6-4b4a-a5ca-44412d764e16-kube-api-access-pkx4n\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137139 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-config\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137154 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/133a42bf-5cdf-4614-8a42-4ce3e350481e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137169 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d707878c-74ce-4dc3-88a0-84845ff53208-machine-approver-tls\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137200 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-service-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137215 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137230 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137244 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7zzn\" (UniqueName: \"kubernetes.io/projected/dac3d607-3725-4a88-95f8-dca21e0bd0e1-kube-api-access-x7zzn\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137259 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137273 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137288 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlb4s\" (UniqueName: \"kubernetes.io/projected/d707878c-74ce-4dc3-88a0-84845ff53208-kube-api-access-nlb4s\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137303 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-config\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137318 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137334 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-client\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137347 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-serving-ca\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137362 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137377 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6spf\" (UniqueName: \"kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137392 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ggx\" (UniqueName: \"kubernetes.io/projected/206c7efc-d3fd-4650-b1fc-89602cff0109-kube-api-access-52ggx\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137407 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-images\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137422 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137474 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-client\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137492 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137506 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac3d607-3725-4a88-95f8-dca21e0bd0e1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137521 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a2998d6f-01b6-4b4a-a5ca-44412d764e16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137535 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-serving-cert\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137550 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137566 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137581 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137594 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137624 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137642 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137657 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-serving-cert\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137714 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmgv\" (UniqueName: \"kubernetes.io/projected/6fdca584-ca4e-44ea-b149-bf27b1896eca-kube-api-access-bhmgv\") pod \"downloads-7954f5f757-kpp2s\" (UID: \"6fdca584-ca4e-44ea-b149-bf27b1896eca\") " pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137795 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-encryption-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137816 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-auth-proxy-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit-dir\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137866 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjgf\" (UniqueName: \"kubernetes.io/projected/058d4627-e6bf-4ce0-a769-846ddc9b6687-kube-api-access-lpjgf\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137884 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-config\") pod 
\"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137899 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137915 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137930 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206c7efc-d3fd-4650-b1fc-89602cff0109-serving-cert\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137946 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnt2\" (UniqueName: \"kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.137979 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-node-pullsecrets\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138048 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7f96c6-f025-4afa-98c2-be96b842ce15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138074 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca\") pod \"console-f9d7485db-8qjn8\" (UID: 
\"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138097 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjng\" (UniqueName: \"kubernetes.io/projected/b6f0474a-87d7-45e8-8bd4-036610a71240-kube-api-access-8tjng\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138139 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138164 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138179 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-policies\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138203 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138224 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e7f96c6-f025-4afa-98c2-be96b842ce15-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138299 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138330 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-trusted-ca\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") 
" pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138356 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-dir\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138377 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2998d6f-01b6-4b4a-a5ca-44412d764e16-serving-cert\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138403 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138426 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138451 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6bt\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-kube-api-access-hg6bt\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138480 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/058d4627-e6bf-4ce0-a769-846ddc9b6687-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138509 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac3d607-3725-4a88-95f8-dca21e0bd0e1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138535 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfj2\" (UniqueName: \"kubernetes.io/projected/e9330cc5-8397-4c11-9ba6-764f28128d7b-kube-api-access-2xfj2\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: 
\"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.138553 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.144575 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.145089 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.145392 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.145920 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.145932 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146093 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146223 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146242 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146329 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146545 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146688 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146755 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146830 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146949 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147050 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.146951 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147241 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147249 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147343 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147444 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147596 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147777 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.147906 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148029 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148147 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148248 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148344 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148644 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.148797 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.149907 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150050 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150164 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150249 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150427 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150648 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.150882 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151062 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151179 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151288 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151418 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151509 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"]
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151599 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151740 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.151840 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.153355 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.153517 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.153664 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.153784 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165047 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165250 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165368 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165476 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165527 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165730 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165799 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165895 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.165978 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.166039 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.166101 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.170447 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.170579 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.171140 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"]
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.171601 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.179010 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.179203 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.196713 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.220563 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.221112 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.222211 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zzkzd"]
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.222798 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.231204 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.231621 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.231903 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.232089 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.232363 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.232512 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.232777 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.233033 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.233297 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.234977 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.234976 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rbqps"]
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.235143 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.235253 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.235360 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.235409 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rbqps"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.237808 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g"]
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.238320 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.238391 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239211 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239230 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-trusted-ca\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239249 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-dir\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239264 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2998d6f-01b6-4b4a-a5ca-44412d764e16-serving-cert\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239279 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239294 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239309 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg6bt\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-kube-api-access-hg6bt\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239324 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/058d4627-e6bf-4ce0-a769-846ddc9b6687-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239338 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac3d607-3725-4a88-95f8-dca21e0bd0e1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239354 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfj2\" (UniqueName: \"kubernetes.io/projected/e9330cc5-8397-4c11-9ba6-764f28128d7b-kube-api-access-2xfj2\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239369 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239385 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239401 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxpj\" (UniqueName: \"kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239416 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239430 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239443 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f0474a-87d7-45e8-8bd4-036610a71240-serving-cert\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239459 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfqxt\" (UniqueName: \"kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239475 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbvz\" (UniqueName: \"kubernetes.io/projected/33653b7e-b48e-447a-84ed-a21dc8b827ac-kube-api-access-svbvz\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239489 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmk67\" (UniqueName: \"kubernetes.io/projected/133a42bf-5cdf-4614-8a42-4ce3e350481e-kube-api-access-cmk67\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239505 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239520 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239537 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239553 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-encryption-config\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239568 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt"
Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239583 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-image-import-ca\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw"
\"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239599 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239637 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkx4n\" (UniqueName: \"kubernetes.io/projected/a2998d6f-01b6-4b4a-a5ca-44412d764e16-kube-api-access-pkx4n\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239656 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-config\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239672 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/133a42bf-5cdf-4614-8a42-4ce3e350481e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239688 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d707878c-74ce-4dc3-88a0-84845ff53208-machine-approver-tls\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239706 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239722 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-service-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239744 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239766 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239800 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7zzn\" (UniqueName: \"kubernetes.io/projected/dac3d607-3725-4a88-95f8-dca21e0bd0e1-kube-api-access-x7zzn\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239819 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239833 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239849 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlb4s\" (UniqueName: \"kubernetes.io/projected/d707878c-74ce-4dc3-88a0-84845ff53208-kube-api-access-nlb4s\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239863 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-config\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239878 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239895 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-client\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239909 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-serving-ca\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239915 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v727f"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239926 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239943 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6spf\" (UniqueName: \"kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239958 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52ggx\" (UniqueName: \"kubernetes.io/projected/206c7efc-d3fd-4650-b1fc-89602cff0109-kube-api-access-52ggx\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239971 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-images\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.239986 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240000 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-client\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240022 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240036 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac3d607-3725-4a88-95f8-dca21e0bd0e1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240052 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a2998d6f-01b6-4b4a-a5ca-44412d764e16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240066 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-serving-cert\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240080 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240095 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240111 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240125 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240142 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240157 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-serving-cert\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240189 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmgv\" (UniqueName: \"kubernetes.io/projected/6fdca584-ca4e-44ea-b149-bf27b1896eca-kube-api-access-bhmgv\") pod \"downloads-7954f5f757-kpp2s\" (UID: \"6fdca584-ca4e-44ea-b149-bf27b1896eca\") " pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240203 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-encryption-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240219 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-auth-proxy-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240233 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit-dir\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240248 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjgf\" (UniqueName: \"kubernetes.io/projected/058d4627-e6bf-4ce0-a769-846ddc9b6687-kube-api-access-lpjgf\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240263 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-config\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240279 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240293 4660 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240308 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206c7efc-d3fd-4650-b1fc-89602cff0109-serving-cert\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240323 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqnt2\" (UniqueName: \"kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-node-pullsecrets\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240352 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240367 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7f96c6-f025-4afa-98c2-be96b842ce15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240381 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240397 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tjng\" (UniqueName: \"kubernetes.io/projected/b6f0474a-87d7-45e8-8bd4-036610a71240-kube-api-access-8tjng\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240413 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240427 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240441 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-policies\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240456 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240472 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e7f96c6-f025-4afa-98c2-be96b842ce15-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.240842 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.241131 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.241185 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-image-import-ca\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.241847 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-config\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.241849 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.242003 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.242416 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-dir\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.242528 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-config\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.242932 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.243077 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.243547 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.244758 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.245189 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.247192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d707878c-74ce-4dc3-88a0-84845ff53208-auth-proxy-config\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.247262 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-audit-dir\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.247801 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-config\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.248378 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.248746 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.251389 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/133a42bf-5cdf-4614-8a42-4ce3e350481e-images\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.288213 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/33653b7e-b48e-447a-84ed-a21dc8b827ac-node-pullsecrets\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.289230 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.289983 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.291472 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.295559 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dac3d607-3725-4a88-95f8-dca21e0bd0e1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.298592 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.302953 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-audit-policies\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.299816 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.300198 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-service-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.303750 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-client\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.303770 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-encryption-config\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.304120 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.304643 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.307023 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/133a42bf-5cdf-4614-8a42-4ce3e350481e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.307334 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.307894 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.310884 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/058d4627-e6bf-4ce0-a769-846ddc9b6687-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.311413 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a2998d6f-01b6-4b4a-a5ca-44412d764e16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.312201 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-serving-ca\") pod \"apiserver-76f77b778f-tmccw\" (UID: 
\"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.312446 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.312884 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-etcd-client\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.314059 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.315261 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.317869 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.318174 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.326152 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e7f96c6-f025-4afa-98c2-be96b842ce15-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.321460 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dac3d607-3725-4a88-95f8-dca21e0bd0e1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.299215 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d707878c-74ce-4dc3-88a0-84845ff53208-machine-approver-tls\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.322319 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: 
\"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.324859 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.319261 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33653b7e-b48e-447a-84ed-a21dc8b827ac-serving-cert\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.327002 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9330cc5-8397-4c11-9ba6-764f28128d7b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.331356 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.331817 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206c7efc-d3fd-4650-b1fc-89602cff0109-serving-cert\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.332208 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.332894 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.349171 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.349199 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.349907 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-encryption-config\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.350494 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.350511 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.350844 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.351826 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2998d6f-01b6-4b4a-a5ca-44412d764e16-serving-cert\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.352115 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9330cc5-8397-4c11-9ba6-764f28128d7b-serving-cert\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.352468 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f0474a-87d7-45e8-8bd4-036610a71240-serving-cert\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.352882 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.355085 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.356161 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.356634 4660 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.356904 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.356925 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.357014 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.357042 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.358331 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7j5ts"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.358972 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.360391 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.361117 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.362108 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.362184 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.362861 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kpp2s"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.364019 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.364955 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.365457 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.367694 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.373921 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.374531 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.374755 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.375692 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.375795 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.376420 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.378192 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.378281 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.378320 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.379167 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.379740 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.380355 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.380437 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.380977 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7f96c6-f025-4afa-98c2-be96b842ce15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.381269 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.382133 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.382170 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.382294 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.382329 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.382837 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.384867 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/206c7efc-d3fd-4650-b1fc-89602cff0109-trusted-ca\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.385761 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.385882 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f0474a-87d7-45e8-8bd4-036610a71240-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.387161 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.389642 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-st7q5"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.389857 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.391497 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.393138 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33653b7e-b48e-447a-84ed-a21dc8b827ac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.395814 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bhg29"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.395978 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.396430 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.397645 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sggfx"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.400381 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.400924 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.401535 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.402199 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.402783 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.403623 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5nhqp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.404083 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.404979 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s945x"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.405599 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.406551 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.407059 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.408216 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.408695 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.409679 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.410191 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.410841 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.411233 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.412188 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.413630 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tmccw"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.415008 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.416147 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zzkzd"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.416655 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.417416 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.418803 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v727f"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.420115 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xfw5p"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.421941 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dwgp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.423693 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-fk72t"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.427008 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.428342 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.429098 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.429482 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.430894 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.432519 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-g2lfb"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.433418 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.433916 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.435597 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.436323 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.438249 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.440088 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.441393 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sggfx"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.442832 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tm95v"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.443442 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.444336 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.445748 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smxsv"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.446873 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.447078 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.448561 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-st7q5"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.450675 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.452728 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.454279 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm95v"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.456373 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.456736 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.458239 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5nhqp"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.459592 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.461492 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.463390 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smxsv"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.465000 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s945x"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.476512 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.496785 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.516181 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.536272 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.574226 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6spf\" (UniqueName: \"kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf\") pod \"route-controller-manager-6576b87f9c-x4nv9\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.575989 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.596428 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.616461 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.637271 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.666575 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.673507 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7zzn\" (UniqueName: \"kubernetes.io/projected/dac3d607-3725-4a88-95f8-dca21e0bd0e1-kube-api-access-x7zzn\") pod \"openshift-apiserver-operator-796bbdcf4f-4qwd8\" (UID: \"dac3d607-3725-4a88-95f8-dca21e0bd0e1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.693129 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.695962 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkx4n\" (UniqueName: \"kubernetes.io/projected/a2998d6f-01b6-4b4a-a5ca-44412d764e16-kube-api-access-pkx4n\") pod \"openshift-config-operator-7777fb866f-bhg29\" (UID: \"a2998d6f-01b6-4b4a-a5ca-44412d764e16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.712819 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlb4s\" (UniqueName: \"kubernetes.io/projected/d707878c-74ce-4dc3-88a0-84845ff53208-kube-api-access-nlb4s\") pod \"machine-approver-56656f9798-5tdv4\" (UID: \"d707878c-74ce-4dc3-88a0-84845ff53208\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.731483 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg6bt\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-kube-api-access-hg6bt\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.738764 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.748969 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.758154 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.800026 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.803381 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52ggx\" (UniqueName: \"kubernetes.io/projected/206c7efc-d3fd-4650-b1fc-89602cff0109-kube-api-access-52ggx\") pod \"console-operator-58897d9998-fk72t\" (UID: \"206c7efc-d3fd-4650-b1fc-89602cff0109\") " pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.817296 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.834656 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.836196 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.844731 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.856820 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" event={"ID":"d707878c-74ce-4dc3-88a0-84845ff53208","Type":"ContainerStarted","Data":"3c773bbd60d431c109d7418b222d256a8d9808963c8683fc17d3acf529312f57"} Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.856986 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.876808 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.890433 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8"] Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.901492 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.917319 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.936865 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.987148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjgf\" (UniqueName: \"kubernetes.io/projected/058d4627-e6bf-4ce0-a769-846ddc9b6687-kube-api-access-lpjgf\") pod \"cluster-samples-operator-665b6dd947-x466n\" (UID: \"058d4627-e6bf-4ce0-a769-846ddc9b6687\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:43 crc kubenswrapper[4660]: I1129 07:17:43.990044 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqnt2\" (UniqueName: \"kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2\") pod \"oauth-openshift-558db77b4-8dwgp\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.016439 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjng\" (UniqueName: \"kubernetes.io/projected/b6f0474a-87d7-45e8-8bd4-036610a71240-kube-api-access-8tjng\") pod \"authentication-operator-69f744f599-xfw5p\" (UID: \"b6f0474a-87d7-45e8-8bd4-036610a71240\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.058452 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfj2\" (UniqueName: \"kubernetes.io/projected/e9330cc5-8397-4c11-9ba6-764f28128d7b-kube-api-access-2xfj2\") pod \"apiserver-7bbb656c7d-rjxhb\" (UID: \"e9330cc5-8397-4c11-9ba6-764f28128d7b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.061420 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.071807 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7f96c6-f025-4afa-98c2-be96b842ce15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crwdj\" (UID: \"5e7f96c6-f025-4afa-98c2-be96b842ce15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.078120 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxpj\" (UniqueName: \"kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj\") pod \"console-f9d7485db-8qjn8\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.078930 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.098059 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.110558 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.111807 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmgv\" (UniqueName: \"kubernetes.io/projected/6fdca584-ca4e-44ea-b149-bf27b1896eca-kube-api-access-bhmgv\") pod \"downloads-7954f5f757-kpp2s\" (UID: \"6fdca584-ca4e-44ea-b149-bf27b1896eca\") " pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.116628 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.123764 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.141339 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.151234 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.156257 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.196761 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbvz\" (UniqueName: \"kubernetes.io/projected/33653b7e-b48e-447a-84ed-a21dc8b827ac-kube-api-access-svbvz\") pod \"apiserver-76f77b778f-tmccw\" (UID: \"33653b7e-b48e-447a-84ed-a21dc8b827ac\") " pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.215450 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfqxt\" (UniqueName: \"kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt\") pod \"controller-manager-879f6c89f-sm8tt\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.253629 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmk67\" (UniqueName: \"kubernetes.io/projected/133a42bf-5cdf-4614-8a42-4ce3e350481e-kube-api-access-cmk67\") pod \"machine-api-operator-5694c8668f-7j5ts\" (UID: \"133a42bf-5cdf-4614-8a42-4ce3e350481e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270270 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270304 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270324 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-client\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270350 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270367 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m494d\" (UniqueName: \"kubernetes.io/projected/8278af76-59f6-440c-a724-ee73498ea89f-kube-api-access-m494d\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270399 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-serving-cert\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270413 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpp55\" (UniqueName: \"kubernetes.io/projected/6c8182cd-5593-4989-a633-74f2115ed6b5-kube-api-access-zpp55\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270429 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-default-certificate\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270453 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270490 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token\") pod 
\"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270532 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-service-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270550 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-stats-auth\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270566 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270583 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvx5\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270598 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8278af76-59f6-440c-a724-ee73498ea89f-service-ca-bundle\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270675 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270702 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270718 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-metrics-certs\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 
07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.270738 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-config\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.284009 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:44.783992428 +0000 UTC m=+155.337522327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.292007 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.292459 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.296307 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.303590 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.304009 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xfw5p"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.312821 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bhg29"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.318147 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.320932 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.330939 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.340973 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.372203 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.372869 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tld9\" (UniqueName: \"kubernetes.io/projected/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-kube-api-access-4tld9\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373054 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eca38a1-1f85-4651-93d7-d6fa8294920a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373080 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v56q\" (UniqueName: \"kubernetes.io/projected/210c4d43-9381-4d14-a0df-dfaa770fc67c-kube-api-access-6v56q\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.373527 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:44.873097355 +0000 UTC m=+155.426627254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373551 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373574 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5dq\" (UniqueName: \"kubernetes.io/projected/cf0decbd-7060-4501-b70c-88462984d70c-kube-api-access-rs5dq\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373775 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-srv-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373795 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced04116-5acd-4171-934a-5a92cbd8a4aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.373815 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba155bca-f84f-4349-9384-03d3fcdb8de0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.376428 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.376457 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 
07:17:44.376784 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-mountpoint-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.376807 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-plugins-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.376904 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:44.876891018 +0000 UTC m=+155.430420917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378406 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxv6\" (UniqueName: \"kubernetes.io/projected/58979674-f1a9-45e9-9dbe-83b07b421682-kube-api-access-7zxv6\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378622 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-serving-cert\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378650 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpp55\" (UniqueName: \"kubernetes.io/projected/6c8182cd-5593-4989-a633-74f2115ed6b5-kube-api-access-zpp55\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378847 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-socket-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378867 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-registration-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: 
\"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.378886 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0decbd-7060-4501-b70c-88462984d70c-config\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379087 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba155bca-f84f-4349-9384-03d3fcdb8de0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379112 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2002833a-4b23-4192-83d1-dd00a412504e-cert\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379130 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzsv\" (UniqueName: \"kubernetes.io/projected/1a2b044c-0f98-459f-99d3-e836134cf09b-kube-api-access-dmzsv\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379331 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-srv-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379349 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-key\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379372 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0decbd-7060-4501-b70c-88462984d70c-serving-cert\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.379397 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.384872 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385321 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385337 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385474 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-stats-auth\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385502 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394a349c-92b8-437a-910c-013d3da3b144-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385522 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvx5\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385719 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccwb\" (UniqueName: \"kubernetes.io/projected/de70bbd5-2757-4733-9617-51928ad8c363-kube-api-access-qccwb\") pod \"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385744 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a2b044c-0f98-459f-99d3-e836134cf09b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385762 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385965 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394a349c-92b8-437a-910c-013d3da3b144-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.385994 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a2b044c-0f98-459f-99d3-e836134cf09b-proxy-tls\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386318 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04116-5acd-4171-934a-5a92cbd8a4aa-config\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386529 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386563 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-metrics-certs\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386768 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-config\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386793 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccgn\" (UniqueName: \"kubernetes.io/projected/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-kube-api-access-wccgn\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386812 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-cabundle\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc 
kubenswrapper[4660]: I1129 07:17:44.386829 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de70bbd5-2757-4733-9617-51928ad8c363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386849 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88w72\" (UniqueName: \"kubernetes.io/projected/2002833a-4b23-4192-83d1-dd00a412504e-kube-api-access-88w72\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386865 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eca38a1-1f85-4651-93d7-d6fa8294920a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386890 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386906 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-client\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386928 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386946 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m494d\" (UniqueName: \"kubernetes.io/projected/8278af76-59f6-440c-a724-ee73498ea89f-kube-api-access-m494d\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386970 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.386990 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7hhh\" (UniqueName: \"kubernetes.io/projected/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-kube-api-access-w7hhh\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387016 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-default-certificate\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387044 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387198 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-images\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387235 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba155bca-f84f-4349-9384-03d3fcdb8de0-config\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387264 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-csi-data-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387281 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210c4d43-9381-4d14-a0df-dfaa770fc67c-proxy-tls\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387304 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ced04116-5acd-4171-934a-5a92cbd8a4aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387319 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387335 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qpx\" (UniqueName: \"kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387365 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387399 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-service-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387416 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394a349c-92b8-437a-910c-013d3da3b144-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387439 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387461 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8278af76-59f6-440c-a724-ee73498ea89f-service-ca-bundle\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.387483 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9hb\" (UniqueName: \"kubernetes.io/projected/543f3390-f981-4d07-bbaa-2139dd4eb2e2-kube-api-access-9z9hb\") pod \"migrator-59844c95c7-wphsb\" (UID: \"543f3390-f981-4d07-bbaa-2139dd4eb2e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.388094 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-service-ca\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.389329 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c8182cd-5593-4989-a633-74f2115ed6b5-config\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.395429 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.397048 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.398434 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-metrics-certs\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.399006 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.399144 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8278af76-59f6-440c-a724-ee73498ea89f-service-ca-bundle\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.399279 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.401379 4660 request.go:700] Waited for 1.020656461s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.402805 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.404753 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.404989 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.409034 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-etcd-client\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.409499 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.415330 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-default-certificate\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.420524 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8182cd-5593-4989-a633-74f2115ed6b5-serving-cert\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.420976 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: W1129 07:17:44.422303 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2998d6f_01b6_4b4a_a5ca_44412d764e16.slice/crio-271a52755bc2d0d9916182ef87571a38ddc1ca5e979af03909fc1a743e467642 
WatchSource:0}: Error finding container 271a52755bc2d0d9916182ef87571a38ddc1ca5e979af03909fc1a743e467642: Status 404 returned error can't find the container with id 271a52755bc2d0d9916182ef87571a38ddc1ca5e979af03909fc1a743e467642 Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.424167 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8278af76-59f6-440c-a724-ee73498ea89f-stats-auth\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:44 crc kubenswrapper[4660]: W1129 07:17:44.433914 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6f0474a_87d7_45e8_8bd4_036610a71240.slice/crio-194c77f36c98ffae3bf936d5e0a035e6b84616e7ea6255c8662fdea8a30f1cbb WatchSource:0}: Error finding container 194c77f36c98ffae3bf936d5e0a035e6b84616e7ea6255c8662fdea8a30f1cbb: Status 404 returned error can't find the container with id 194c77f36c98ffae3bf936d5e0a035e6b84616e7ea6255c8662fdea8a30f1cbb Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.438458 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.457389 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.478163 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.487050 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fk72t"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.496933 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499387 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499586 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6f44f78-e884-4407-872e-ca5d29e061e9-metrics-tls\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499650 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-plugins-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499671 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zxv6\" (UniqueName: 
\"kubernetes.io/projected/58979674-f1a9-45e9-9dbe-83b07b421682-kube-api-access-7zxv6\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499695 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-socket-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.499766 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:44.999735314 +0000 UTC m=+155.553265223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499828 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-registration-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499867 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0decbd-7060-4501-b70c-88462984d70c-config\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499894 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba155bca-f84f-4349-9384-03d3fcdb8de0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499895 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-plugins-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499908 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-socket-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499921 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.499977 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2002833a-4b23-4192-83d1-dd00a412504e-cert\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500004 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzsv\" (UniqueName: \"kubernetes.io/projected/1a2b044c-0f98-459f-99d3-e836134cf09b-kube-api-access-dmzsv\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500026 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-srv-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500050 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-key\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500157 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ht6r\" (UniqueName: \"kubernetes.io/projected/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-kube-api-access-7ht6r\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500185 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0decbd-7060-4501-b70c-88462984d70c-serving-cert\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500211 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4kt\" (UniqueName: \"kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500229 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnqcw\" (UniqueName: 
\"kubernetes.io/projected/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-kube-api-access-hnqcw\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500246 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500263 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500281 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d053433c-aa27-47d1-81f8-03595088a40f-config-volume\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394a349c-92b8-437a-910c-013d3da3b144-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500320 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500327 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-registration-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500335 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqt2\" (UniqueName: \"kubernetes.io/projected/d053433c-aa27-47d1-81f8-03595088a40f-kube-api-access-shqt2\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500577 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qccwb\" (UniqueName: 
\"kubernetes.io/projected/de70bbd5-2757-4733-9617-51928ad8c363-kube-api-access-qccwb\") pod \"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a2b044c-0f98-459f-99d3-e836134cf09b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500796 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lx2\" (UniqueName: \"kubernetes.io/projected/1eca38a1-1f85-4651-93d7-d6fa8294920a-kube-api-access-p8lx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500832 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69jm\" (UniqueName: \"kubernetes.io/projected/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-kube-api-access-m69jm\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500868 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500898 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394a349c-92b8-437a-910c-013d3da3b144-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.500970 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501011 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a2b044c-0f98-459f-99d3-e836134cf09b-proxy-tls\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501040 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04116-5acd-4171-934a-5a92cbd8a4aa-config\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501378 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a2b044c-0f98-459f-99d3-e836134cf09b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501402 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wccgn\" (UniqueName: \"kubernetes.io/projected/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-kube-api-access-wccgn\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501429 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-certs\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501447 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-cabundle\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501464 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/119beed4-7907-454f-99fc-5a3fc04f7484-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501482 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de70bbd5-2757-4733-9617-51928ad8c363-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501499 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jp28\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-kube-api-access-5jp28\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501519 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-88w72\" (UniqueName: \"kubernetes.io/projected/2002833a-4b23-4192-83d1-dd00a412504e-kube-api-access-88w72\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501547 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eca38a1-1f85-4651-93d7-d6fa8294920a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501575 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-node-bootstrap-token\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501596 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d053433c-aa27-47d1-81f8-03595088a40f-metrics-tls\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501663 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501688 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7hhh\" (UniqueName: \"kubernetes.io/projected/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-kube-api-access-w7hhh\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501716 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-images\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501737 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba155bca-f84f-4349-9384-03d3fcdb8de0-config\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501759 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-apiservice-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" 
(UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501780 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr48c\" (UniqueName: \"kubernetes.io/projected/119beed4-7907-454f-99fc-5a3fc04f7484-kube-api-access-mr48c\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501805 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-csi-data-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501827 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210c4d43-9381-4d14-a0df-dfaa770fc67c-proxy-tls\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501848 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53865d66-0a9f-48e8-aef3-c487db9538f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501867 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-tmpfs\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501893 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ced04116-5acd-4171-934a-5a92cbd8a4aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501927 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4qpx\" (UniqueName: \"kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501943 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501960 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.501983 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-webhook-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502028 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502058 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394a349c-92b8-437a-910c-013d3da3b144-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502086 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9hb\" (UniqueName: \"kubernetes.io/projected/543f3390-f981-4d07-bbaa-2139dd4eb2e2-kube-api-access-9z9hb\") pod \"migrator-59844c95c7-wphsb\" (UID: \"543f3390-f981-4d07-bbaa-2139dd4eb2e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502108 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klx9d\" (UniqueName: \"kubernetes.io/projected/1c512614-b5a1-47e5-8779-cc31e225150c-kube-api-access-klx9d\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502137 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2lxx\" (UniqueName: \"kubernetes.io/projected/a6f44f78-e884-4407-872e-ca5d29e061e9-kube-api-access-l2lxx\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502161 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tld9\" (UniqueName: \"kubernetes.io/projected/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-kube-api-access-4tld9\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502182 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eca38a1-1f85-4651-93d7-d6fa8294920a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502200 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v56q\" (UniqueName: \"kubernetes.io/projected/210c4d43-9381-4d14-a0df-dfaa770fc67c-kube-api-access-6v56q\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5dq\" (UniqueName: \"kubernetes.io/projected/cf0decbd-7060-4501-b70c-88462984d70c-kube-api-access-rs5dq\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502237 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53865d66-0a9f-48e8-aef3-c487db9538f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502255 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502273 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-srv-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502290 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced04116-5acd-4171-934a-5a92cbd8a4aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502307 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba155bca-f84f-4349-9384-03d3fcdb8de0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502328 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502346 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-mountpoint-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502421 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-mountpoint-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502022 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04116-5acd-4171-934a-5a92cbd8a4aa-config\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.502752 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/210c4d43-9381-4d14-a0df-dfaa770fc67c-images\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.503425 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba155bca-f84f-4349-9384-03d3fcdb8de0-config\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.503431 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394a349c-92b8-437a-910c-013d3da3b144-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.503573 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/58979674-f1a9-45e9-9dbe-83b07b421682-csi-data-dir\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.504076 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eca38a1-1f85-4651-93d7-d6fa8294920a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.504309 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.00429742 +0000 UTC m=+155.557827319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.508086 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eca38a1-1f85-4651-93d7-d6fa8294920a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.508318 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a2b044c-0f98-459f-99d3-e836134cf09b-proxy-tls\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.509400 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394a349c-92b8-437a-910c-013d3da3b144-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.509890 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced04116-5acd-4171-934a-5a92cbd8a4aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.516165 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 
07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.535598 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.545645 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/210c4d43-9381-4d14-a0df-dfaa770fc67c-proxy-tls\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.558069 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.576373 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.599391 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605085 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.605300 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.105269253 +0000 UTC m=+155.658799152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605356 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jp28\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-kube-api-access-5jp28\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605390 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/119beed4-7907-454f-99fc-5a3fc04f7484-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605425 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-node-bootstrap-token\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605449 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d053433c-aa27-47d1-81f8-03595088a40f-metrics-tls\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605514 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-apiservice-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605538 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr48c\" (UniqueName: \"kubernetes.io/projected/119beed4-7907-454f-99fc-5a3fc04f7484-kube-api-access-mr48c\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605583 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53865d66-0a9f-48e8-aef3-c487db9538f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605603 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-tmpfs\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605742 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605771 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-webhook-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605795 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605829 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klx9d\" (UniqueName: \"kubernetes.io/projected/1c512614-b5a1-47e5-8779-cc31e225150c-kube-api-access-klx9d\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605860 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2lxx\" (UniqueName: \"kubernetes.io/projected/a6f44f78-e884-4407-872e-ca5d29e061e9-kube-api-access-l2lxx\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605918 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53865d66-0a9f-48e8-aef3-c487db9538f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605942 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.605975 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606008 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6f44f78-e884-4407-872e-ca5d29e061e9-metrics-tls\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606063 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606112 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ht6r\" (UniqueName: \"kubernetes.io/projected/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-kube-api-access-7ht6r\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606153 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4kt\" (UniqueName: \"kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606176 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnqcw\" (UniqueName: \"kubernetes.io/projected/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-kube-api-access-hnqcw\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606201 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606222 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d053433c-aa27-47d1-81f8-03595088a40f-config-volume\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606255 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606281 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqt2\" (UniqueName: \"kubernetes.io/projected/d053433c-aa27-47d1-81f8-03595088a40f-kube-api-access-shqt2\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606326 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8lx2\" (UniqueName: \"kubernetes.io/projected/1eca38a1-1f85-4651-93d7-d6fa8294920a-kube-api-access-p8lx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606348 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m69jm\" (UniqueName: \"kubernetes.io/projected/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-kube-api-access-m69jm\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.606381 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-certs\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.607694 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53865d66-0a9f-48e8-aef3-c487db9538f2-trusted-ca\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.607999 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.107985968 +0000 UTC m=+155.661515877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.613450 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.614663 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-tmpfs\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.615452 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6f44f78-e884-4407-872e-ca5d29e061e9-metrics-tls\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.618487 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.618568 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.618698 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53865d66-0a9f-48e8-aef3-c487db9538f2-metrics-tls\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.634233 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/119beed4-7907-454f-99fc-5a3fc04f7484-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.636541 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.656798 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 
07:17:44.665413 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d053433c-aa27-47d1-81f8-03595088a40f-config-volume\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.680455 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.697359 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.708400 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.709143 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.209122536 +0000 UTC m=+155.762652445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.713862 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.717375 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d053433c-aa27-47d1-81f8-03595088a40f-metrics-tls\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.717601 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.732976 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dwgp"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.740090 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.756997 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.758545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de70bbd5-2757-4733-9617-51928ad8c363-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.777147 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.794452 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba155bca-f84f-4349-9384-03d3fcdb8de0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.799065 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.810412 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.810753 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.310743056 +0000 UTC m=+155.864272955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.816272 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.821819 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7j5ts"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.828798 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.846077 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.854506 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.857813 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.858632 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.870282 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" event={"ID":"d707878c-74ce-4dc3-88a0-84845ff53208","Type":"ContainerStarted","Data":"51ba249995ee58ed7662eae22672ddb70cf8c221034bf27d52586d1a5d6da4a7"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.879740 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.880923 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" event={"ID":"5e7f96c6-f025-4afa-98c2-be96b842ce15","Type":"ContainerStarted","Data":"929ed77e31a6d28f16c1d23040fcda7635e2d8b8e0291cac9aff2a2929c1aa3c"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.889526 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" event={"ID":"3069d78e-6be2-46bf-baae-bbe2ccf0b06b","Type":"ContainerStarted","Data":"cb368e7f6e1e2e3544d6a9c7babcb38204cb960ee3a595accd1347a7dd6a6de7"} Nov 29 07:17:44 crc kubenswrapper[4660]: W1129 07:17:44.889657 4660 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6051c490_f396_4257_a4f8_e0c8a1bcf910.slice/crio-bf05727a23408b9b1deea7c6f9cd991decfe09cd0dcacee49ded3a24ff39d47b WatchSource:0}: Error finding container bf05727a23408b9b1deea7c6f9cd991decfe09cd0dcacee49ded3a24ff39d47b: Status 404 returned error can't find the container with id bf05727a23408b9b1deea7c6f9cd991decfe09cd0dcacee49ded3a24ff39d47b Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.890780 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fk72t" event={"ID":"206c7efc-d3fd-4650-b1fc-89602cff0109","Type":"ContainerStarted","Data":"864dbae33e36e6c40699effe78ca9a162214280efe904fad4600c66c041b4098"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.891830 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" event={"ID":"dac3d607-3725-4a88-95f8-dca21e0bd0e1","Type":"ContainerStarted","Data":"514e4bf57e1ebe7368b63cd547b95703f5d174b4f801c70bdf4b8d916de5ceb7"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.891850 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" event={"ID":"dac3d607-3725-4a88-95f8-dca21e0bd0e1","Type":"ContainerStarted","Data":"ff186239a3a332b6000eab626dc6e2f61e1afaf9afeb8453dffa370f79554680"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.893094 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" event={"ID":"b6f0474a-87d7-45e8-8bd4-036610a71240","Type":"ContainerStarted","Data":"194c77f36c98ffae3bf936d5e0a035e6b84616e7ea6255c8662fdea8a30f1cbb"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.897464 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.903291 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" event={"ID":"aacf3710-663f-4cfa-aa89-7bbc848e094d","Type":"ContainerStarted","Data":"2adda2a8771cb1f2797ace1973a84247ff9cdf13e1bf7e6039647411e024886a"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.903331 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" event={"ID":"aacf3710-663f-4cfa-aa89-7bbc848e094d","Type":"ContainerStarted","Data":"3f7359020ba06b345d58af525068706894c2c81dc19af0a1d012d53d6fde03b4"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.906128 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" event={"ID":"133a42bf-5cdf-4614-8a42-4ce3e350481e","Type":"ContainerStarted","Data":"471a7612ba18e75037dea59112c1c468875087b090a8ca4ea969b328b0916f69"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.906826 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-key\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.906902 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" event={"ID":"a2998d6f-01b6-4b4a-a5ca-44412d764e16","Type":"ContainerStarted","Data":"271a52755bc2d0d9916182ef87571a38ddc1ca5e979af03909fc1a743e467642"} Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.913413 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:44 crc kubenswrapper[4660]: E1129 07:17:44.914184 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.414166007 +0000 UTC m=+155.967695916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.916672 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.944026 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.957012 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tmccw"] Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.957679 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.964396 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-signing-cabundle\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:44 crc kubenswrapper[4660]: I1129 07:17:44.979172 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.001863 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.015423 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.017124 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cf0decbd-7060-4501-b70c-88462984d70c-serving-cert\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.017696 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.51767382 +0000 UTC m=+156.071203759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.017815 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.021473 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0decbd-7060-4501-b70c-88462984d70c-config\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.042738 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.059767 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.075995 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.096177 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.107651 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.116225 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.116811 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.616794932 +0000 UTC m=+156.170324831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.117551 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.121959 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.139935 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.146452 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.149265 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.155804 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.156266 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.166350 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-srv-cert\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.178510 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.188751 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kpp2s"] Nov 29 07:17:45 
crc kubenswrapper[4660]: I1129 07:17:45.189135 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-srv-cert\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.191501 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"] Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.201428 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:17:45 crc kubenswrapper[4660]: W1129 07:17:45.202673 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9330cc5_8397_4c11_9ba6_764f28128d7b.slice/crio-535a0d33bcdfcf3730939f466f527b56299e583577a6e2056c809e09a59af8e9 WatchSource:0}: Error finding container 535a0d33bcdfcf3730939f466f527b56299e583577a6e2056c809e09a59af8e9: Status 404 returned error can't find the container with id 535a0d33bcdfcf3730939f466f527b56299e583577a6e2056c809e09a59af8e9 Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.204767 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.216350 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.217320 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.217793 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.717781765 +0000 UTC m=+156.271311664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.242080 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.265906 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.270136 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-apiservice-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.270891 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-webhook-cert\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.270968 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-node-bootstrap-token\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.277541 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.290456 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1c512614-b5a1-47e5-8779-cc31e225150c-certs\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.297579 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.316005 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.318136 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.318271 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.818254495 +0000 UTC m=+156.371784394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.318417 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.318713 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.818705637 +0000 UTC m=+156.372235536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.336101 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.356448 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.376233 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.393017 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2002833a-4b23-4192-83d1-dd00a412504e-cert\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.397111 4660 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.421379 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 
07:17:45.421943 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:45.921922882 +0000 UTC m=+156.475452781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.421994 4660 request.go:700] Waited for 1.974858081s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.424132 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.436741 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.497483 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpp55\" (UniqueName: \"kubernetes.io/projected/6c8182cd-5593-4989-a633-74f2115ed6b5-kube-api-access-zpp55\") pod \"etcd-operator-b45778765-zzkzd\" (UID: \"6c8182cd-5593-4989-a633-74f2115ed6b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.523194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.523639 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.023626414 +0000 UTC m=+156.577156313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.533778 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.550190 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvx5\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.551518 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m494d\" (UniqueName: \"kubernetes.io/projected/8278af76-59f6-440c-a724-ee73498ea89f-kube-api-access-m494d\") pod \"router-default-5444994796-rbqps\" (UID: \"8278af76-59f6-440c-a724-ee73498ea89f\") " pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.570287 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zxv6\" (UniqueName: \"kubernetes.io/projected/58979674-f1a9-45e9-9dbe-83b07b421682-kube-api-access-7zxv6\") pod \"csi-hostpathplugin-smxsv\" (UID: \"58979674-f1a9-45e9-9dbe-83b07b421682\") " pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.589863 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzsv\" (UniqueName: \"kubernetes.io/projected/1a2b044c-0f98-459f-99d3-e836134cf09b-kube-api-access-dmzsv\") pod \"machine-config-controller-84d6567774-stqbv\" (UID: \"1a2b044c-0f98-459f-99d3-e836134cf09b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.610541 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qccwb\" (UniqueName: \"kubernetes.io/projected/de70bbd5-2757-4733-9617-51928ad8c363-kube-api-access-qccwb\") pod \"package-server-manager-789f6589d5-6jxb6\" (UID: \"de70bbd5-2757-4733-9617-51928ad8c363\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.623970 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.624399 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.124384251 +0000 UTC m=+156.677914150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.633736 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/394a349c-92b8-437a-910c-013d3da3b144-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5bf2g\" (UID: \"394a349c-92b8-437a-910c-013d3da3b144\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.654178 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wccgn\" (UniqueName: \"kubernetes.io/projected/db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026-kube-api-access-wccgn\") pod \"catalog-operator-68c6474976-2jxgk\" (UID: \"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.670988 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88w72\" (UniqueName: \"kubernetes.io/projected/2002833a-4b23-4192-83d1-dd00a412504e-kube-api-access-88w72\") pod \"ingress-canary-tm95v\" (UID: \"2002833a-4b23-4192-83d1-dd00a412504e\") " pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.691018 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7hhh\" (UniqueName: \"kubernetes.io/projected/fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b-kube-api-access-w7hhh\") pod \"service-ca-9c57cc56f-5nhqp\" (UID: \"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b\") " pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.691259 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm95v" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.709947 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.715286 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ced04116-5acd-4171-934a-5a92cbd8a4aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gvr6s\" (UID: \"ced04116-5acd-4171-934a-5a92cbd8a4aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.725786 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.726210 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.226195958 +0000 UTC m=+156.779725857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.751102 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5dq\" (UniqueName: \"kubernetes.io/projected/cf0decbd-7060-4501-b70c-88462984d70c-kube-api-access-rs5dq\") pod \"service-ca-operator-777779d784-s945x\" (UID: \"cf0decbd-7060-4501-b70c-88462984d70c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.765938 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.771735 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9hb\" (UniqueName: \"kubernetes.io/projected/543f3390-f981-4d07-bbaa-2139dd4eb2e2-kube-api-access-9z9hb\") pod \"migrator-59844c95c7-wphsb\" (UID: \"543f3390-f981-4d07-bbaa-2139dd4eb2e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.773318 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.797660 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tld9\" (UniqueName: \"kubernetes.io/projected/7a202942-1c6b-4ae3-abd2-acfedf5c76a9-kube-api-access-4tld9\") pod \"olm-operator-6b444d44fb-lfxc4\" (UID: \"7a202942-1c6b-4ae3-abd2-acfedf5c76a9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.803964 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4qpx\" (UniqueName: \"kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx\") pod \"marketplace-operator-79b997595-974tz\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") " pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.822497 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v56q\" (UniqueName: \"kubernetes.io/projected/210c4d43-9381-4d14-a0df-dfaa770fc67c-kube-api-access-6v56q\") pod \"machine-config-operator-74547568cd-hwmtw\" (UID: \"210c4d43-9381-4d14-a0df-dfaa770fc67c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.826585 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.826945 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.326926824 +0000 UTC m=+156.880456723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.827712 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.837551 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.844094 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba155bca-f84f-4349-9384-03d3fcdb8de0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xq9vp\" (UID: \"ba155bca-f84f-4349-9384-03d3fcdb8de0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.847146 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.871220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr48c\" (UniqueName: \"kubernetes.io/projected/119beed4-7907-454f-99fc-5a3fc04f7484-kube-api-access-mr48c\") pod \"multus-admission-controller-857f4d67dd-st7q5\" (UID: \"119beed4-7907-454f-99fc-5a3fc04f7484\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.871946 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.874175 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.883843 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.900781 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.908944 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.910074 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2lxx\" (UniqueName: \"kubernetes.io/projected/a6f44f78-e884-4407-872e-ca5d29e061e9-kube-api-access-l2lxx\") pod \"dns-operator-744455d44c-v727f\" (UID: \"a6f44f78-e884-4407-872e-ca5d29e061e9\") " pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.917486 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.940568 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ht6r\" (UniqueName: \"kubernetes.io/projected/a6fc6ac1-6b93-4e45-a741-9df933ea2d11-kube-api-access-7ht6r\") pod \"control-plane-machine-set-operator-78cbb6b69f-znn4f\" (UID: \"a6fc6ac1-6b93-4e45-a741-9df933ea2d11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.942896 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.943384 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.944179 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.944782 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" Nov 29 07:17:45 crc kubenswrapper[4660]: E1129 07:17:45.944944 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.444920776 +0000 UTC m=+156.998450675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.953147 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.954227 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4kt\" (UniqueName: \"kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt\") pod \"collect-profiles-29406675-pxtj7\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.962942 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.971827 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnqcw\" (UniqueName: \"kubernetes.io/projected/5f6c877c-fa26-422c-8ddc-3b8c2bd633fe-kube-api-access-hnqcw\") pod \"kube-storage-version-migrator-operator-b67b599dd-85vhc\" (UID: \"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.974897 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" event={"ID":"d707878c-74ce-4dc3-88a0-84845ff53208","Type":"ContainerStarted","Data":"1d4b46619e283c1ad7a8e08202b94880041757235243a81ee4a963fed151204f"} Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.979522 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" event={"ID":"133a42bf-5cdf-4614-8a42-4ce3e350481e","Type":"ContainerStarted","Data":"9b0d451e8f250d63d2e87ba34db4c0c5ba7ae88c0d95bf092049ae5a3092a987"} Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.981042 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" event={"ID":"b6f0474a-87d7-45e8-8bd4-036610a71240","Type":"ContainerStarted","Data":"0767394d2a786a7cc89c9eca9e550c95d5cb7d127b0ca8e63aed6a91d3378d20"} Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.983424 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jp28\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-kube-api-access-5jp28\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.984032 4660 generic.go:334] "Generic (PLEG): container finished" podID="33653b7e-b48e-447a-84ed-a21dc8b827ac" containerID="4b07a097b4c31024ca444f6880c3fb0d44cf146a4bc7d081c51ed2cd2be7c013" exitCode=0 Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.984115 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" event={"ID":"33653b7e-b48e-447a-84ed-a21dc8b827ac","Type":"ContainerDied","Data":"4b07a097b4c31024ca444f6880c3fb0d44cf146a4bc7d081c51ed2cd2be7c013"} Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.984136 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" event={"ID":"33653b7e-b48e-447a-84ed-a21dc8b827ac","Type":"ContainerStarted","Data":"5b7c286a8f7b46f771bf1ef735def1350e8e1f813d6071a846c8a0d87a1af7c1"} Nov 29 07:17:45 crc kubenswrapper[4660]: I1129 07:17:45.996533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53865d66-0a9f-48e8-aef3-c487db9538f2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mktzj\" (UID: \"53865d66-0a9f-48e8-aef3-c487db9538f2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.009515 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" 
event={"ID":"6051c490-f396-4257-a4f8-e0c8a1bcf910","Type":"ContainerStarted","Data":"05ccc0f3dce711c727ee6acba8d0c57b8d0cc002fe99de83ed3fe432c9d8261c"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.009567 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" event={"ID":"6051c490-f396-4257-a4f8-e0c8a1bcf910","Type":"ContainerStarted","Data":"bf05727a23408b9b1deea7c6f9cd991decfe09cd0dcacee49ded3a24ff39d47b"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.012251 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klx9d\" (UniqueName: \"kubernetes.io/projected/1c512614-b5a1-47e5-8779-cc31e225150c-kube-api-access-klx9d\") pod \"machine-config-server-g2lfb\" (UID: \"1c512614-b5a1-47e5-8779-cc31e225150c\") " pod="openshift-machine-config-operator/machine-config-server-g2lfb" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.012762 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.014001 4660 generic.go:334] "Generic (PLEG): container finished" podID="a2998d6f-01b6-4b4a-a5ca-44412d764e16" containerID="3d784c161a393276facddc14ad5bc72506ba486b54954cdb608264fe3db2b262" exitCode=0 Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.014242 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" event={"ID":"a2998d6f-01b6-4b4a-a5ca-44412d764e16","Type":"ContainerDied","Data":"3d784c161a393276facddc14ad5bc72506ba486b54954cdb608264fe3db2b262"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.029021 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rbqps" event={"ID":"8278af76-59f6-440c-a724-ee73498ea89f","Type":"ContainerStarted","Data":"76d28b06328682d9f1750df76e86e57cc3f6f52b9a385cd0a758cdb8925351a9"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.040842 4660 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sm8tt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.040917 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.042870 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" event={"ID":"3069d78e-6be2-46bf-baae-bbe2ccf0b06b","Type":"ContainerStarted","Data":"c70417ddc3b9603f65aefca39955d9c043718df8acea820cde0d5454e9f1a7a7"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.042903 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.046344 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.047281 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.547134963 +0000 UTC m=+157.100664862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.047824 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.048319 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.548293235 +0000 UTC m=+157.101823134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.048982 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fk72t" event={"ID":"206c7efc-d3fd-4650-b1fc-89602cff0109","Type":"ContainerStarted","Data":"87256d762a744e0c6fa376d13dae3e35585021f939651ca8e4f9014e1104e122"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.049686 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fk72t" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.050212 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smxsv"] Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.053247 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69jm\" (UniqueName: \"kubernetes.io/projected/5c3c5a57-1a1a-4d53-a68f-f74dd194382e-kube-api-access-m69jm\") pod \"packageserver-d55dfcdfc-bd2x2\" (UID: \"5c3c5a57-1a1a-4d53-a68f-f74dd194382e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.054773 4660 patch_prober.go:28] interesting pod/console-operator-58897d9998-fk72t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.054811 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fk72t" podUID="206c7efc-d3fd-4650-b1fc-89602cff0109" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.051466 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8qjn8" event={"ID":"f46e1d0c-84fc-4518-9101-a64174cee99a","Type":"ContainerStarted","Data":"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060553 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8qjn8" event={"ID":"f46e1d0c-84fc-4518-9101-a64174cee99a","Type":"ContainerStarted","Data":"a5b8f776bc378c9b48f563ab34a1e64c5921f39855f1999c2c38aeabacb43ccf"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060577 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" event={"ID":"e9330cc5-8397-4c11-9ba6-764f28128d7b","Type":"ContainerStarted","Data":"c184b078c15a7024183835e24a38ad42f1ee5b66feb7731b2fc6ec16c5f0f53d"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060590 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" 
event={"ID":"e9330cc5-8397-4c11-9ba6-764f28128d7b","Type":"ContainerStarted","Data":"535a0d33bcdfcf3730939f466f527b56299e583577a6e2056c809e09a59af8e9"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060601 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" event={"ID":"5e7f96c6-f025-4afa-98c2-be96b842ce15","Type":"ContainerStarted","Data":"81fa1b08713c62306e6595bc6c2429fef682d5d11979373cd451d3d7ace47a1c"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060696 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kpp2s" event={"ID":"6fdca584-ca4e-44ea-b149-bf27b1896eca","Type":"ContainerStarted","Data":"5546eafd13314c7c232bb6214d589253964094cf6f88d04d89f55c3eb8e3c312"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.060708 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kpp2s" event={"ID":"6fdca584-ca4e-44ea-b149-bf27b1896eca","Type":"ContainerStarted","Data":"bc9a10f6e113b062e7af09617c37430d423dd66610538a43f0f4e2ae0073d7a3"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.062397 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" event={"ID":"058d4627-e6bf-4ce0-a769-846ddc9b6687","Type":"ContainerStarted","Data":"bdd6f2faa9dadb7380638708c616301c17f90db40d8f497aba0c7116c8cb4b5c"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.062422 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" event={"ID":"058d4627-e6bf-4ce0-a769-846ddc9b6687","Type":"ContainerStarted","Data":"3cb7d148a845acbc6dfa296d628f0289fb3d89dc91339ce3d382c933e1c34c79"} Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.063146 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.079105 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.079275 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8lx2\" (UniqueName: \"kubernetes.io/projected/1eca38a1-1f85-4651-93d7-d6fa8294920a-kube-api-access-p8lx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-jf77g\" (UID: \"1eca38a1-1f85-4651-93d7-d6fa8294920a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.083031 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.100895 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.110131 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.120246 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.121694 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zzkzd"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.138308 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm95v"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.152804 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.152940 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.652901799 +0000 UTC m=+157.206431698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.155387 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.160240 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.167968 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.667932303 +0000 UTC m=+157.221462272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.213118 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqt2\" (UniqueName: \"kubernetes.io/projected/d053433c-aa27-47d1-81f8-03595088a40f-kube-api-access-shqt2\") pod \"dns-default-sggfx\" (UID: \"d053433c-aa27-47d1-81f8-03595088a40f\") " pod="openshift-dns/dns-default-sggfx"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.267734 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.270055 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.770024706 +0000 UTC m=+157.323554785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.271059 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.293507 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g2lfb"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.296322 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.372460 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.372819 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.87280484 +0000 UTC m=+157.426334739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.473641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.473794 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.973775733 +0000 UTC m=+157.527305632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.473888 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.474239 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:46.974225935 +0000 UTC m=+157.527755834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.492553 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sggfx"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.521344 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.576159 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.576468 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.076453372 +0000 UTC m=+157.629983261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: W1129 07:17:46.590772 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podced04116_5acd_4171_934a_5a92cbd8a4aa.slice/crio-4eb8b42b44a1171babb372036aa1622b5db373130b5782464a9df7e7e1613720 WatchSource:0}: Error finding container 4eb8b42b44a1171babb372036aa1622b5db373130b5782464a9df7e7e1613720: Status 404 returned error can't find the container with id 4eb8b42b44a1171babb372036aa1622b5db373130b5782464a9df7e7e1613720
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.594491 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-st7q5"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.625921 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp"
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.677324 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.678058 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.178043093 +0000 UTC m=+157.731572992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.739040 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv"]
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.778421 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.778759 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.278736968 +0000 UTC m=+157.832266867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.895483 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:46 crc kubenswrapper[4660]: E1129 07:17:46.899522 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.399508776 +0000 UTC m=+157.953038675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:46 crc kubenswrapper[4660]: I1129 07:17:46.999814 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.000541 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.500222472 +0000 UTC m=+158.053752391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.085803 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" event={"ID":"119beed4-7907-454f-99fc-5a3fc04f7484","Type":"ContainerStarted","Data":"80d6b6a2532b8c606fa19836fe1f9fdabfbf26aac8bae97cf5d50863aaddd1db"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.104106 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.104458 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.604442974 +0000 UTC m=+158.157972883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.106161 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" event={"ID":"58979674-f1a9-45e9-9dbe-83b07b421682","Type":"ContainerStarted","Data":"4c4808f0ae802f54de5b5e2192dd95174d51fb15a39d8a9d476f0b21c8a4e9ca"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.164594 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" event={"ID":"133a42bf-5cdf-4614-8a42-4ce3e350481e","Type":"ContainerStarted","Data":"82ea9189c6021a109002c15051b4d3e671573f3eb7e02d33f6c80817eab248ac"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.175323 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" event={"ID":"6c8182cd-5593-4989-a633-74f2115ed6b5","Type":"ContainerStarted","Data":"2ef86eaf9682da7bab9e9e5cead2023a9439287b2489f5024d7dac6e18bab62b"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.195386 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" event={"ID":"394a349c-92b8-437a-910c-013d3da3b144","Type":"ContainerStarted","Data":"9dc9bc9cd98f726395a641270beac79b8ad8915336c1602010163180cf520898"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.208747 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.209253 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.709235713 +0000 UTC m=+158.262765612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.211109 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm95v" event={"ID":"2002833a-4b23-4192-83d1-dd00a412504e","Type":"ContainerStarted","Data":"859c1bec7c228c3fffb397d3a5719d24bf753f8db0520041df393f42597b558e"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.212677 4660 generic.go:334] "Generic (PLEG): container finished" podID="e9330cc5-8397-4c11-9ba6-764f28128d7b" containerID="c184b078c15a7024183835e24a38ad42f1ee5b66feb7731b2fc6ec16c5f0f53d" exitCode=0
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.212719 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" event={"ID":"e9330cc5-8397-4c11-9ba6-764f28128d7b","Type":"ContainerDied","Data":"c184b078c15a7024183835e24a38ad42f1ee5b66feb7731b2fc6ec16c5f0f53d"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.215226 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g2lfb" event={"ID":"1c512614-b5a1-47e5-8779-cc31e225150c","Type":"ContainerStarted","Data":"d04c9db95d9dbfa4dd2a78a371968ea1d7294f9362a24c22bca40febae5783a1"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.216405 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" event={"ID":"ced04116-5acd-4171-934a-5a92cbd8a4aa","Type":"ContainerStarted","Data":"4eb8b42b44a1171babb372036aa1622b5db373130b5782464a9df7e7e1613720"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.218325 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" event={"ID":"1a2b044c-0f98-459f-99d3-e836134cf09b","Type":"ContainerStarted","Data":"3e4c9c8f9f974085f83051890ea2b2ff6326de8bfd409c332f745458a3e847ed"}
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.224024 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kpp2s"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.241145 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.241196 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.243246 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.312695 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.344600 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.844579423 +0000 UTC m=+158.398109322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.348274 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.413693 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.414361 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:47.914344717 +0000 UTC m=+158.467874616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.491636 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5nhqp"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.493185 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.515573 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.516562 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.016543873 +0000 UTC m=+158.570073772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.578498 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.616509 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.617036 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.617296 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.11728146 +0000 UTC m=+158.670811359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.649682 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.723015 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.725075 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.22506006 +0000 UTC m=+158.778589959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.826018 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.826258 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.326244219 +0000 UTC m=+158.879774118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.826323 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s945x"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.826347 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.826359 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7"]
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.880380 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8qjn8" podStartSLOduration=135.880364621 podStartE2EDuration="2m15.880364621s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:47.83646364 +0000 UTC m=+158.389993529" watchObservedRunningTime="2025-11-29 07:17:47.880364621 +0000 UTC m=+158.433894520"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.905986 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fk72t"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.915429 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" podStartSLOduration=135.915413537 podStartE2EDuration="2m15.915413537s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:47.91262713 +0000 UTC m=+158.466157049" watchObservedRunningTime="2025-11-29 07:17:47.915413537 +0000 UTC m=+158.468943436"
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.927129 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:47 crc kubenswrapper[4660]: E1129 07:17:47.927541 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.4275276 +0000 UTC m=+158.981057499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:47 crc kubenswrapper[4660]: I1129 07:17:47.971217 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5tdv4" podStartSLOduration=138.971195034 podStartE2EDuration="2m18.971195034s" podCreationTimestamp="2025-11-29 07:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:47.970381302 +0000 UTC m=+158.523911211" watchObservedRunningTime="2025-11-29 07:17:47.971195034 +0000 UTC m=+158.524724943"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.028162 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.028499 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.528481023 +0000 UTC m=+159.082010922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.048690 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" podStartSLOduration=136.04867254 podStartE2EDuration="2m16.04867254s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.01094624 +0000 UTC m=+158.564476149" watchObservedRunningTime="2025-11-29 07:17:48.04867254 +0000 UTC m=+158.602202439"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.054591 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4qwd8" podStartSLOduration=136.054564982 podStartE2EDuration="2m16.054564982s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.045993046 +0000 UTC m=+158.599522945" watchObservedRunningTime="2025-11-29 07:17:48.054564982 +0000 UTC m=+158.608094881"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.069774 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.080233 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6"]
Nov 29 07:17:48 crc kubenswrapper[4660]: W1129 07:17:48.106501 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb3f36ef_02fd_4a96_90c6_7d2f75d15a3b.slice/crio-4aa7c4808cdf6c275d8c85430ff31a2349d51e0f4487153ec526dad052a031af WatchSource:0}: Error finding container 4aa7c4808cdf6c275d8c85430ff31a2349d51e0f4487153ec526dad052a031af: Status 404 returned error can't find the container with id 4aa7c4808cdf6c275d8c85430ff31a2349d51e0f4487153ec526dad052a031af
Nov 29 07:17:48 crc kubenswrapper[4660]: W1129 07:17:48.122176 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fc6ac1_6b93_4e45_a741_9df933ea2d11.slice/crio-4c18012efb79ce11e47dd13696bd1c1ce3796dd5e57ce09814bf7c9ce3bc0696 WatchSource:0}: Error finding container 4c18012efb79ce11e47dd13696bd1c1ce3796dd5e57ce09814bf7c9ce3bc0696: Status 404 returned error can't find the container with id 4c18012efb79ce11e47dd13696bd1c1ce3796dd5e57ce09814bf7c9ce3bc0696
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.124557 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-kpp2s" podStartSLOduration=136.1245321 podStartE2EDuration="2m16.1245321s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.09911412 +0000 UTC m=+158.652644029" watchObservedRunningTime="2025-11-29 07:17:48.1245321 +0000 UTC m=+158.678061999"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.130303 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.165729 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.665713075 +0000 UTC m=+159.219242974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.174767 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.186989 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.189544 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" podStartSLOduration=135.189523541 podStartE2EDuration="2m15.189523541s" podCreationTimestamp="2025-11-29 07:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.165586311 +0000 UTC m=+158.719116210" watchObservedRunningTime="2025-11-29 07:17:48.189523541 +0000 UTC m=+158.743053440"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.213240 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fk72t" podStartSLOduration=136.213227735 podStartE2EDuration="2m16.213227735s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.212463654 +0000 UTC m=+158.765993553" watchObservedRunningTime="2025-11-29 07:17:48.213227735 +0000 UTC m=+158.766757634"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.244483 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.244679 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.744658512 +0000 UTC m=+159.298188411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.244958 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.245210 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"]
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.245265 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.745254478 +0000 UTC m=+159.298784377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.293876 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm95v" event={"ID":"2002833a-4b23-4192-83d1-dd00a412504e","Type":"ContainerStarted","Data":"016ab1ba2ab4c3689214b57c6cd4db88244c02ddd3862b243f199ef89bd1707f"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.300007 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7j5ts" podStartSLOduration=136.299989986 podStartE2EDuration="2m16.299989986s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.298952828 +0000 UTC m=+158.852482727" watchObservedRunningTime="2025-11-29 07:17:48.299989986 +0000 UTC m=+158.853519885"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.300096 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crwdj" podStartSLOduration=136.300092339 podStartE2EDuration="2m16.300092339s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.260879398 +0000 UTC m=+158.814409287" watchObservedRunningTime="2025-11-29 07:17:48.300092339 +0000 UTC m=+158.853622238"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.351775 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.352116 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.852093812 +0000 UTC m=+159.405623711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.362449 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xfw5p" podStartSLOduration=136.362428308 podStartE2EDuration="2m16.362428308s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.354943401 +0000 UTC m=+158.908473300" watchObservedRunningTime="2025-11-29 07:17:48.362428308 +0000 UTC m=+158.915958207"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.423321 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tm95v" podStartSLOduration=5.423296755 podStartE2EDuration="5.423296755s" podCreationTimestamp="2025-11-29 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.418718389 +0000 UTC m=+158.972248288" watchObservedRunningTime="2025-11-29 07:17:48.423296755 +0000 UTC m=+158.976826654"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.460707 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.462718 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" event={"ID":"058d4627-e6bf-4ce0-a769-846ddc9b6687","Type":"ContainerStarted","Data":"284681b2b9285aef5db051e4476fade91a94b9646f019e20e527e2407d8aaf5d"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.464083 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.464362 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:48.964352307 +0000 UTC m=+159.517882206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.481041 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" event={"ID":"74fd06c4-6eb8-4056-ba52-e1260a0d4058","Type":"ContainerStarted","Data":"170b7e8f62d9ac8bb31cf797c742f8406c02b4f8699fea8ce13d6b9401a76a21"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.519507 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" event={"ID":"de70bbd5-2757-4733-9617-51928ad8c363","Type":"ContainerStarted","Data":"75b708fce357b72528ca0b80e0f6cac2345f0b19f24a4e0765283ac5668d987b"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.521336 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerStarted","Data":"76a3c21f6b0fe7e96fff7b1d69e05c705f9d889ea4cb1d6e811f0763be419460"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.522009 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" event={"ID":"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026","Type":"ContainerStarted","Data":"4dbf642e49fa3b73aac11ef681d23e3cf7f8286ec81df712915646cdc009604a"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.522592 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" event={"ID":"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b","Type":"ContainerStarted","Data":"4aa7c4808cdf6c275d8c85430ff31a2349d51e0f4487153ec526dad052a031af"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.523208 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" event={"ID":"a6fc6ac1-6b93-4e45-a741-9df933ea2d11","Type":"ContainerStarted","Data":"4c18012efb79ce11e47dd13696bd1c1ce3796dd5e57ce09814bf7c9ce3bc0696"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.523806 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" event={"ID":"7a202942-1c6b-4ae3-abd2-acfedf5c76a9","Type":"ContainerStarted","Data":"54b4c51a794954a5e6ef021c892f0c1e599e3580c15b7080216ce27576c8e04c"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.524367 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" event={"ID":"1eca38a1-1f85-4651-93d7-d6fa8294920a","Type":"ContainerStarted","Data":"2e2a14a528828b0ace85c0a4050bfe12bac0e53837536d56314e7b58fd12a9fc"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.540405 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x466n" podStartSLOduration=136.540390683 podStartE2EDuration="2m16.540390683s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.539312943 +0000 UTC m=+159.092842842" watchObservedRunningTime="2025-11-29 07:17:48.540390683 +0000 UTC m=+159.093920582"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.562436 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" event={"ID":"cf0decbd-7060-4501-b70c-88462984d70c","Type":"ContainerStarted","Data":"53156a0f45bb3019be25248a2067bd7cbfa11de064667a7d0705651976efb7d3"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.566542 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v727f"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.566789 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.572924 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.072865958 +0000 UTC m=+159.626395947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.588450 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" event={"ID":"210c4d43-9381-4d14-a0df-dfaa770fc67c","Type":"ContainerStarted","Data":"807d7c0e3ffeae02b468a44b1383bf489d3fcef1e8dc4bf46a06b7e2b49736e0"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.593603 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" event={"ID":"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe","Type":"ContainerStarted","Data":"1c8f2dc7c8e444dbf3c26dfcd307325859d22a924ee19bf3b0e2e52211f1b868"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.618209 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" event={"ID":"a2998d6f-01b6-4b4a-a5ca-44412d764e16","Type":"ContainerStarted","Data":"db2304b20c99db7f6e412c1bc017a63fb0e74f59159ab16475de000b89bc6a2e"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.618428 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.636077 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rbqps" event={"ID":"8278af76-59f6-440c-a724-ee73498ea89f","Type":"ContainerStarted","Data":"88850c6b3db3618181fb0796d8d34e42186f6fb846de57e6bb89a0791a940bd3"}
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.637872 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.637999 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.648662 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" podStartSLOduration=136.648646866 podStartE2EDuration="2m16.648646866s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.645275234 +0000 UTC m=+159.198805133" watchObservedRunningTime="2025-11-29 07:17:48.648646866 +0000 UTC m=+159.202176765"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.676300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.680442 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.180424242 +0000 UTC m=+159.733954241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.777440 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.777684 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.277654162 +0000 UTC m=+159.831184061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.777710 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.778047 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.278036943 +0000 UTC m=+159.831566842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.778099 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rbqps"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.798869 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 29 07:17:48 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld
Nov 29 07:17:48 crc kubenswrapper[4660]: [+]process-running ok
Nov 29 07:17:48 crc kubenswrapper[4660]: healthz check failed
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.798932 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.882214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.882861 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.38280112 +0000 UTC m=+159.936331029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:48 crc kubenswrapper[4660]: W1129 07:17:48.909012 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f44f78_e884_4407_872e_ca5d29e061e9.slice/crio-7b7f4e249f7f6fe5e402bdab53de489ac03edbb36007dca78e4d88e70a0f5d6a WatchSource:0}: Error finding container 7b7f4e249f7f6fe5e402bdab53de489ac03edbb36007dca78e4d88e70a0f5d6a: Status 404 returned error can't find the container with id 7b7f4e249f7f6fe5e402bdab53de489ac03edbb36007dca78e4d88e70a0f5d6a
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.910071 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rbqps" podStartSLOduration=136.910052911 podStartE2EDuration="2m16.910052911s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:48.665957474 +0000 UTC m=+159.219487373" watchObservedRunningTime="2025-11-29 07:17:48.910052911 +0000 UTC m=+159.463582810"
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.918219 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sggfx"]
Nov 29 07:17:48 crc kubenswrapper[4660]: I1129 07:17:48.984073 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:48 crc kubenswrapper[4660]: E1129 07:17:48.984541 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.484527174 +0000 UTC m=+160.038057073 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.085366 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.085724 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.585704902 +0000 UTC m=+160.139234801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.190587 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.190926 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.690915753 +0000 UTC m=+160.244445652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.291793 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.291927 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.791907886 +0000 UTC m=+160.345437785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.292238 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.292506 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.792497712 +0000 UTC m=+160.346027611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.394103 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.394360 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.89434563 +0000 UTC m=+160.447875529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.495005 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.495288 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:49.995277062 +0000 UTC m=+160.548806961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.595630 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.595905 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.095892045 +0000 UTC m=+160.649421944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.652250 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" event={"ID":"1eca38a1-1f85-4651-93d7-d6fa8294920a","Type":"ContainerStarted","Data":"79ecd30a9004916482e0bc02463ee6feaea3c1fe71696069d3b55c29cb70db50"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.671543 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sggfx" event={"ID":"d053433c-aa27-47d1-81f8-03595088a40f","Type":"ContainerStarted","Data":"db4b8a75bc74beb33daf98145fafb45c1410a2d31d3fd926db85a798f8fb56af"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.711373 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.711962 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.211950634 +0000 UTC m=+160.765480523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.733126 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" event={"ID":"53865d66-0a9f-48e8-aef3-c487db9538f2","Type":"ContainerStarted","Data":"cba82a83c096d36d4867f1ca4a2470e9e4f9107aa43ec94b4b3468d77b125d62"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.740094 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" event={"ID":"394a349c-92b8-437a-910c-013d3da3b144","Type":"ContainerStarted","Data":"be7f7e8d65c07eaddeb200bae84e4bbb112d9110f6c2d0361cb1813fc0e64a47"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.768739 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" event={"ID":"6c8182cd-5593-4989-a633-74f2115ed6b5","Type":"ContainerStarted","Data":"cbb78120be3e98a1a46d08b655d0ea7bee94ad090627dedf25ab4f92182d66de"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.787109 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" event={"ID":"db4d7bbe-a50a-49e9-aaa0-6c7f7ffaf026","Type":"ContainerStarted","Data":"a8d0bafc8f68ed03d0ed21e69143afc00bd882c2a326415e7a00ca10543caba0"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.798399 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:49 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:49 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:49 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.798458 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.811834 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" event={"ID":"119beed4-7907-454f-99fc-5a3fc04f7484","Type":"ContainerStarted","Data":"1acdfe95999dd83cd518b759a4fd914888eb15177bd23bd2cceb1411aa1f5d70"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.812958 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.815122 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.315095296 +0000 UTC m=+160.868625225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.822276 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" event={"ID":"ba155bca-f84f-4349-9384-03d3fcdb8de0","Type":"ContainerStarted","Data":"eab6f8704673ea1d87349a9c4b77cacb50d023bb615986c3e931d578232b368b"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.839941 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" event={"ID":"ced04116-5acd-4171-934a-5a92cbd8a4aa","Type":"ContainerStarted","Data":"c6aec474ae4d9234bc6e9ecc646d1623356d7c1395239c08f328fc9a850c772c"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.862530 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" event={"ID":"33653b7e-b48e-447a-84ed-a21dc8b827ac","Type":"ContainerStarted","Data":"29933b1261f10117888943eea39952ba28e087df0d91e85b7dde74bedbae8939"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.880180 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" event={"ID":"a6f44f78-e884-4407-872e-ca5d29e061e9","Type":"ContainerStarted","Data":"7b7f4e249f7f6fe5e402bdab53de489ac03edbb36007dca78e4d88e70a0f5d6a"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.925397 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:49 crc kubenswrapper[4660]: E1129 07:17:49.926870 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.426858467 +0000 UTC m=+160.980388366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.958148 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" event={"ID":"5c3c5a57-1a1a-4d53-a68f-f74dd194382e","Type":"ContainerStarted","Data":"3725be088373b969524b5fc1bbda944fa36813170da769a3fbea3c0919033f86"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.966123 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" event={"ID":"1a2b044c-0f98-459f-99d3-e836134cf09b","Type":"ContainerStarted","Data":"fb552c27cc20d7bdd70d086407e8e068b3d85bff8f97934e2b594fe3f212e1f7"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.974522 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" event={"ID":"de70bbd5-2757-4733-9617-51928ad8c363","Type":"ContainerStarted","Data":"2696b7fe279563e3507faffb192b81cca431eed910025a60c2d6659b48949ebf"} Nov 29 07:17:49 crc kubenswrapper[4660]: I1129 07:17:49.983435 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g2lfb" event={"ID":"1c512614-b5a1-47e5-8779-cc31e225150c","Type":"ContainerStarted","Data":"d6c1aae585d3037a611192d3f3bb6790b2c7681fd0ac9cd977502c2f6422f46c"} Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.003902 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" event={"ID":"543f3390-f981-4d07-bbaa-2139dd4eb2e2","Type":"ContainerStarted","Data":"d375f19de3c3de913ff8701ef0cf1ce1703e077927604cc08ef9e7a7d6c4331d"} Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.027579 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.028350 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.528331124 +0000 UTC m=+161.081861023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.028818 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.030787 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.530777261 +0000 UTC m=+161.084307160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.131773 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.132633 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.632598518 +0000 UTC m=+161.186128417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.236130 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.236532 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.736517662 +0000 UTC m=+161.290047561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.337013 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.337263 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.837248838 +0000 UTC m=+161.390778737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.398535 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zzkzd" podStartSLOduration=138.398520417 podStartE2EDuration="2m18.398520417s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:50.365517627 +0000 UTC m=+160.919047536" watchObservedRunningTime="2025-11-29 07:17:50.398520417 +0000 UTC m=+160.952050316" Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.400385 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5bf2g" podStartSLOduration=138.400371558 podStartE2EDuration="2m18.400371558s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:50.39716299 +0000 UTC m=+160.950692889" watchObservedRunningTime="2025-11-29 07:17:50.400371558 +0000 UTC m=+160.953901457" Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.435219 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-g2lfb" podStartSLOduration=7.435201138 podStartE2EDuration="7.435201138s" podCreationTimestamp="2025-11-29 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:50.431762104 +0000 UTC m=+160.985292003" watchObservedRunningTime="2025-11-29 07:17:50.435201138 +0000 UTC m=+160.988731037" Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.438406 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.438738 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:50.938725226 +0000 UTC m=+161.492255125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.460454 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gvr6s" podStartSLOduration=138.460434043 podStartE2EDuration="2m18.460434043s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:50.458348026 +0000 UTC m=+161.011877925" watchObservedRunningTime="2025-11-29 07:17:50.460434043 +0000 UTC m=+161.013963942" Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.539206 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.539560 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.039542114 +0000 UTC m=+161.593072013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.640203 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.640514 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.140497547 +0000 UTC m=+161.694027446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.741589 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.741693 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.241678225 +0000 UTC m=+161.795208124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.741938 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.742221 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.24221226 +0000 UTC m=+161.795742159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.776998 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:50 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:50 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:50 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.777066 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.842893 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.843030 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.343005368 +0000 UTC m=+161.896535267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.843168 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.843557 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.343545093 +0000 UTC m=+161.897074992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:50 crc kubenswrapper[4660]: I1129 07:17:50.944573 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:50 crc kubenswrapper[4660]: E1129 07:17:50.944941 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.444910436 +0000 UTC m=+161.998440335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.005374 4660 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bhg29 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.005444 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" podUID="a2998d6f-01b6-4b4a-a5ca-44412d764e16" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.010100 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" event={"ID":"58979674-f1a9-45e9-9dbe-83b07b421682","Type":"ContainerStarted","Data":"7aeff1028ae461240c6b0001248917260018070c7613f1ceef11b8b01723adc9"} Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.011338 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" event={"ID":"a6fc6ac1-6b93-4e45-a741-9df933ea2d11","Type":"ContainerStarted","Data":"be5b80cce4694e3963deae8e3cf014ade8a6b1bd30b6a14623e5f79aa8c7c49c"} Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.012659 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" event={"ID":"210c4d43-9381-4d14-a0df-dfaa770fc67c","Type":"ContainerStarted","Data":"080075a724a8013e117a54c3e0f0969b5d7a4447a122c118ffe6dd8242dd2179"} Nov 29 
07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.033770 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk" podStartSLOduration=139.033754726 podStartE2EDuration="2m19.033754726s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:51.032127591 +0000 UTC m=+161.585657500" watchObservedRunningTime="2025-11-29 07:17:51.033754726 +0000 UTC m=+161.587284625" Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.045489 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jf77g" podStartSLOduration=139.045473608 podStartE2EDuration="2m19.045473608s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:51.045211691 +0000 UTC m=+161.598741590" watchObservedRunningTime="2025-11-29 07:17:51.045473608 +0000 UTC m=+161.599003507" Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.045840 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.047397 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.54738301 +0000 UTC m=+162.100913019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.146994 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.147221 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.647188492 +0000 UTC m=+162.200718401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.147306 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.147601 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.647589163 +0000 UTC m=+162.201119062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.248086 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.248180 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.748164905 +0000 UTC m=+162.301694804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.248337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.248599 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.748592447 +0000 UTC m=+162.302122346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.350249 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.350352 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.850335891 +0000 UTC m=+162.403865790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.350639 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.350900 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.850893736 +0000 UTC m=+162.404423635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.454712 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.454800 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.954782669 +0000 UTC m=+162.508312568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.454999 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.455230 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:51.955221092 +0000 UTC m=+162.508750991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.555904 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.556141 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.056117183 +0000 UTC m=+162.609647072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.556436 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.556872 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.056852203 +0000 UTC m=+162.610382102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.657938 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.658206 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.158174856 +0000 UTC m=+162.711704755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.759336 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.759666 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.259655612 +0000 UTC m=+162.813185511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.780161 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:51 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:51 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:51 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.780228 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.860377 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.860591 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.360565473 +0000 UTC m=+162.914095372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.860683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.861031 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.361017317 +0000 UTC m=+162.914547206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:51 crc kubenswrapper[4660]: I1129 07:17:51.961656 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:51 crc kubenswrapper[4660]: E1129 07:17:51.961920 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.461905527 +0000 UTC m=+163.015435426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.030145 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" event={"ID":"cf0decbd-7060-4501-b70c-88462984d70c","Type":"ContainerStarted","Data":"53f03e2a420cd9ef382c57c3be0304037fb2965f430e73703b06a2863b91de30"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.055142 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" event={"ID":"5c3c5a57-1a1a-4d53-a68f-f74dd194382e","Type":"ContainerStarted","Data":"fba0a6ce3a9d07bdb6a014730ab157b4442ce0fe1e6090dbc699464ef06163db"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.063468 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.063789 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.563776084 +0000 UTC m=+163.117305983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.066972 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" event={"ID":"ba155bca-f84f-4349-9384-03d3fcdb8de0","Type":"ContainerStarted","Data":"c877eabc7d76f1c96b513ac8645a21d238fbdcdd2feb36eb374f5e91a4f7dde9"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.075130 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" event={"ID":"7a202942-1c6b-4ae3-abd2-acfedf5c76a9","Type":"ContainerStarted","Data":"3fc43db8e7f1a7c85fb2848134c5d64b61075a349c4c609d903558f3c72bb782"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.076669 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" event={"ID":"543f3390-f981-4d07-bbaa-2139dd4eb2e2","Type":"ContainerStarted","Data":"b454a0999311a57a63e3d1ba62eb748799e000352ad0cfb13be11756fd45b216"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.083926 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" event={"ID":"1a2b044c-0f98-459f-99d3-e836134cf09b","Type":"ContainerStarted","Data":"8371d557b476d5157f6e936b45c44e9596ea69963ce3bed76e17afd63fc882c6"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.085757 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" event={"ID":"5f6c877c-fa26-422c-8ddc-3b8c2bd633fe","Type":"ContainerStarted","Data":"99133f226ea954c5f9241d6d0e253af6a3d87a47ea786de6b1bb6b3bbb0395e9"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.086937 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerStarted","Data":"5ba20059163302436888816182924c592bbc1e8fa9b9903e8c93c9dc7eed2117"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.088064 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" event={"ID":"fb3f36ef-02fd-4a96-90c6-7d2f75d15a3b","Type":"ContainerStarted","Data":"ff51377507e5925c69d0627775a0742ea944df6f9fbec87199e1dbdfd2626b27"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.094439 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" event={"ID":"e9330cc5-8397-4c11-9ba6-764f28128d7b","Type":"ContainerStarted","Data":"af15fc46613f1dce8c56a32955f585e670c9872357da2bf65784577466462881"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.097444 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" 
event={"ID":"74fd06c4-6eb8-4056-ba52-e1260a0d4058","Type":"ContainerStarted","Data":"2bebb1b480c679df46386778a530c4125916fd0a57c9dc8d58752ea533d27abb"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.099562 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" event={"ID":"53865d66-0a9f-48e8-aef3-c487db9538f2","Type":"ContainerStarted","Data":"13045e2f9fc3b9d272b5d9eec18bb1533969b69017a4a8af018f77a6d4cdbc7a"} Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.164877 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.165057 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.665029576 +0000 UTC m=+163.218559475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.165124 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.165459 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.665444987 +0000 UTC m=+163.218974886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.266090 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.266271 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.766237635 +0000 UTC m=+163.319767534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.266337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.266658 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.766644926 +0000 UTC m=+163.320174825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.367776 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.367968 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.867943938 +0000 UTC m=+163.421473837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.368074 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.368395 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.86838617 +0000 UTC m=+163.421916069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.469582 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.469811 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.969778174 +0000 UTC m=+163.523308073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.469919 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.470274 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:52.970259258 +0000 UTC m=+163.523789157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.570774 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.571141 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.071123728 +0000 UTC m=+163.624653627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.672514 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.672859 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.172842931 +0000 UTC m=+163.726372830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.773414 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.773588 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.273562757 +0000 UTC m=+163.827092656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.773857 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.774266 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.274244917 +0000 UTC m=+163.827774816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.777501 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:52 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:52 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:52 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.777560 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.838878 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.874761 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.874934 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.374906111 +0000 UTC m=+163.928436010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.875147 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.875478 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.375465456 +0000 UTC m=+163.928995355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:52 crc kubenswrapper[4660]: I1129 07:17:52.976709 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:52 crc kubenswrapper[4660]: E1129 07:17:52.977166 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.477125379 +0000 UTC m=+164.030655278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.078933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.079493 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.579464629 +0000 UTC m=+164.132994568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.105879 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" event={"ID":"a6f44f78-e884-4407-872e-ca5d29e061e9","Type":"ContainerStarted","Data":"3565cfd21b1ed476d2724210ef8d3e3944ff37957615b6a52c9fefa4d9199a5e"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.108011 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" event={"ID":"119beed4-7907-454f-99fc-5a3fc04f7484","Type":"ContainerStarted","Data":"46cb1518f932bccf4ebc501a02d5d48430712f5f8432f484525f38eafb8fdbbb"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.110146 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" event={"ID":"210c4d43-9381-4d14-a0df-dfaa770fc67c","Type":"ContainerStarted","Data":"57c62b2555c00da005194a9fead5b8e4d4668552098e0d6d046928a7f4d3bcf0"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.112358 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sggfx" event={"ID":"d053433c-aa27-47d1-81f8-03595088a40f","Type":"ContainerStarted","Data":"69c54c7948bd8f3a7a176bb75d1b62456162a91c043752ae3940a1547cd9b14f"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.115383 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" event={"ID":"de70bbd5-2757-4733-9617-51928ad8c363","Type":"ContainerStarted","Data":"b22e8f45106d8453e10198ae9aa13304614dafb8fa7574f7200629886584d032"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.119113 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" event={"ID":"33653b7e-b48e-447a-84ed-a21dc8b827ac","Type":"ContainerStarted","Data":"dc083c2c24b09ef997dade44c7f5e3df4e856358b46b5f6ab3478aff83b417f3"} Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.165575 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podStartSLOduration=141.165550851 podStartE2EDuration="2m21.165550851s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.14047362 +0000 UTC m=+163.694003529" watchObservedRunningTime="2025-11-29 07:17:53.165550851 +0000 UTC m=+163.719080760" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.167222 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-znn4f" podStartSLOduration=141.167208187 podStartE2EDuration="2m21.167208187s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.157249192 +0000 
UTC m=+163.710779091" watchObservedRunningTime="2025-11-29 07:17:53.167208187 +0000 UTC m=+163.720738086" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.179777 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.179883 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.679861037 +0000 UTC m=+164.233390936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.179988 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.180374 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.68035498 +0000 UTC m=+164.233884879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.186348 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xq9vp" podStartSLOduration=141.186326094 podStartE2EDuration="2m21.186326094s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.185177002 +0000 UTC m=+163.738706901" watchObservedRunningTime="2025-11-29 07:17:53.186326094 +0000 UTC m=+163.739855993" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.269377 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" podStartSLOduration=141.269354912 podStartE2EDuration="2m21.269354912s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.241574918 +0000 UTC m=+163.795104817" watchObservedRunningTime="2025-11-29 07:17:53.269354912 +0000 UTC m=+163.822884811" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.296196 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.296583 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.796562613 +0000 UTC m=+164.350092512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.303383 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stqbv" podStartSLOduration=141.30336613 podStartE2EDuration="2m21.30336613s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.301116818 +0000 UTC m=+163.854646717" watchObservedRunningTime="2025-11-29 07:17:53.30336613 +0000 UTC m=+163.856896029" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.343766 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5nhqp" podStartSLOduration=140.343746824 podStartE2EDuration="2m20.343746824s" podCreationTimestamp="2025-11-29 07:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.341315666 +0000 UTC m=+163.894845575" watchObservedRunningTime="2025-11-29 07:17:53.343746824 +0000 UTC m=+163.897276723" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.371076 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s945x" podStartSLOduration=141.371060156 podStartE2EDuration="2m21.371060156s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.367676853 +0000 UTC m=+163.921206752" watchObservedRunningTime="2025-11-29 07:17:53.371060156 +0000 UTC m=+163.924590055" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.399299 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.399597 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:53.899585522 +0000 UTC m=+164.453115421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.426340 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" podStartSLOduration=141.42632439 podStartE2EDuration="2m21.42632439s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.422652578 +0000 UTC m=+163.976182477" watchObservedRunningTime="2025-11-29 07:17:53.42632439 +0000 UTC m=+163.979854289" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.437766 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2" podStartSLOduration=141.437748794 podStartE2EDuration="2m21.437748794s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.437524638 +0000 UTC m=+163.991054537" watchObservedRunningTime="2025-11-29 07:17:53.437748794 +0000 UTC m=+163.991278683" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.468273 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4" podStartSLOduration=141.468256664 podStartE2EDuration="2m21.468256664s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.461845648 +0000 UTC m=+164.015375547" watchObservedRunningTime="2025-11-29 07:17:53.468256664 +0000 UTC m=+164.021786563" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.496331 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-85vhc" podStartSLOduration=141.496315889 podStartE2EDuration="2m21.496315889s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:53.495467685 +0000 UTC m=+164.048997584" watchObservedRunningTime="2025-11-29 07:17:53.496315889 +0000 UTC m=+164.049845788" Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.500497 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.500896 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-29 07:17:54.000879534 +0000 UTC m=+164.554409433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.602044 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.602395 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.102377181 +0000 UTC m=+164.655907080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.702603 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.702748 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.202723317 +0000 UTC m=+164.756253216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.702789 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.703075 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.203064286 +0000 UTC m=+164.756594185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.780541 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 29 07:17:53 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld
Nov 29 07:17:53 crc kubenswrapper[4660]: [+]process-running ok
Nov 29 07:17:53 crc kubenswrapper[4660]: healthz check failed
Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.780646 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.804036 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.804590 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.304363019 +0000 UTC m=+164.857892918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:53 crc kubenswrapper[4660]: I1129 07:17:53.905114 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:53 crc kubenswrapper[4660]: E1129 07:17:53.905660 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.40564743 +0000 UTC m=+164.959177329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.006236 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.006391 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.506366226 +0000 UTC m=+165.059896125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.006631 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.006939 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.506927621 +0000 UTC m=+165.060457520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.107964 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.108134 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.60810771 +0000 UTC m=+165.161637619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.108272 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.108586 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.608573273 +0000 UTC m=+165.162103172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.126177 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" event={"ID":"543f3390-f981-4d07-bbaa-2139dd4eb2e2","Type":"ContainerStarted","Data":"8228cdc35d61691c63b606d1b3339914535ea06dfe897b4ab6893e292ffe7903"}
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.128165 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sggfx" event={"ID":"d053433c-aa27-47d1-81f8-03595088a40f","Type":"ContainerStarted","Data":"dd797e7312329036564278847c73cb13ea34cee3e23dbae95741bc12ed711a06"}
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.128282 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sggfx"
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.129996 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" event={"ID":"53865d66-0a9f-48e8-aef3-c487db9538f2","Type":"ContainerStarted","Data":"e35017e0b23406054e87b4204cbd4ae845dad3ec5167a3bee6a0a584b79c47db"}
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.131766 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" event={"ID":"a6f44f78-e884-4407-872e-ca5d29e061e9","Type":"ContainerStarted","Data":"54d6ef0f316fed8360ef3bd307357ea6e471bfe3e31e87e8a017d59de2e9933f"}
Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.209049 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.209157 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.709140805 +0000 UTC m=+165.262670694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.211728 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.212421 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.712410855 +0000 UTC m=+165.265940754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.292671 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" podStartSLOduration=142.292652927 podStartE2EDuration="2m22.292652927s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.289097389 +0000 UTC m=+164.842627318" watchObservedRunningTime="2025-11-29 07:17:54.292652927 +0000 UTC m=+164.846182826" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.293789 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wphsb" podStartSLOduration=142.293782268 podStartE2EDuration="2m22.293782268s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.210252825 +0000 UTC m=+164.763782724" watchObservedRunningTime="2025-11-29 07:17:54.293782268 +0000 UTC m=+164.847312167" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.304354 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.304395 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.312942 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.313040 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.813024188 +0000 UTC m=+165.366554077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.313305 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.313569 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.813562483 +0000 UTC m=+165.367092382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.331857 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.332603 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400330 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400381 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400567 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400601 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400636 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.400629 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.402037 4660 patch_prober.go:28] interesting pod/console-f9d7485db-8qjn8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.402070 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8qjn8" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.415333 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.415503 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.915479952 +0000 UTC m=+165.469009851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.415719 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.415990 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:54.915977255 +0000 UTC m=+165.469507154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.457336 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-st7q5" podStartSLOduration=142.457319465 podStartE2EDuration="2m22.457319465s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.36643144 +0000 UTC m=+164.919961349" watchObservedRunningTime="2025-11-29 07:17:54.457319465 +0000 UTC m=+165.010849364" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.458475 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hwmtw" podStartSLOduration=142.458468737 podStartE2EDuration="2m22.458468737s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.448902843 +0000 UTC m=+165.002432742" watchObservedRunningTime="2025-11-29 07:17:54.458468737 +0000 UTC m=+165.011998636" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.515735 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mktzj" podStartSLOduration=142.515715425 podStartE2EDuration="2m22.515715425s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.515284703 +0000 UTC m=+165.068814602" watchObservedRunningTime="2025-11-29 07:17:54.515715425 +0000 UTC m=+165.069245324" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.516642 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.516792 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.016769373 +0000 UTC m=+165.570299272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.516866 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.517166 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.017158295 +0000 UTC m=+165.570688194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.565464 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sggfx" podStartSLOduration=11.565444425 podStartE2EDuration="11.565444425s" podCreationTimestamp="2025-11-29 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.564290223 +0000 UTC m=+165.117820122" watchObservedRunningTime="2025-11-29 07:17:54.565444425 +0000 UTC m=+165.118974324" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.598369 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" podStartSLOduration=142.598350793 podStartE2EDuration="2m22.598350793s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:54.587042181 +0000 UTC m=+165.140572070" watchObservedRunningTime="2025-11-29 07:17:54.598350793 +0000 UTC m=+165.151880692" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.617625 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.617771 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-29 07:17:55.117751717 +0000 UTC m=+165.671281616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.617797 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.618077 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.118070056 +0000 UTC m=+165.671599945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.656966 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"] Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.661270 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.661724 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.670958 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.716068 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"] Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.718365 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.718485 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.718519 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr596\" (UniqueName: \"kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.718602 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.718697 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.218682809 +0000 UTC m=+165.772212708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.779672 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:54 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:54 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:54 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.779744 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.819870 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.820120 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.820146 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr596\" (UniqueName: \"kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.820171 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.820440 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.320428283 +0000 UTC m=+165.873958182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.820562 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.820833 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.826303 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mdtvz"] Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.827456 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.829974 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.850791 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdtvz"] Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.865963 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr596\" (UniqueName: \"kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596\") pod \"certified-operators-vw7mz\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") " pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.921404 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.921633 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkv59\" (UniqueName: \"kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.921655 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " 
pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.921670 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:54 crc kubenswrapper[4660]: E1129 07:17:54.921781 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.421766937 +0000 UTC m=+165.975296836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:54 crc kubenswrapper[4660]: I1129 07:17:54.984091 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.022902 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.022957 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkv59\" (UniqueName: \"kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.022976 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.022990 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.023407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz" Nov 29 
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.024848 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.037363 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swwtq"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.038312 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.052132 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkv59\" (UniqueName: \"kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59\") pod \"community-operators-mdtvz\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") " pod="openshift-marketplace/community-operators-mdtvz"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.083889 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swwtq"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.123388 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.123524 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxjr\" (UniqueName: \"kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.123554 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.123604 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.123708 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.623693972 +0000 UTC m=+166.177223871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.141172 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdtvz"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.184720 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" event={"ID":"58979674-f1a9-45e9-9dbe-83b07b421682","Type":"ContainerStarted","Data":"db2498cbc92a43a1983cfa5e602a688c584414aefe302950754a19a8c002a442"}
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.202108 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rjxhb"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.210166 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-v727f" podStartSLOduration=143.210149905 podStartE2EDuration="2m23.210149905s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:17:55.20853694 +0000 UTC m=+165.762066839" watchObservedRunningTime="2025-11-29 07:17:55.210149905 +0000 UTC m=+165.763679804"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.224425 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.224791 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxjr\" (UniqueName: \"kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.224845 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.224869 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.224919 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.226222 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.726210107 +0000 UTC m=+166.279740006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.227908 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.237928 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m5w6w"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.246823 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.290461 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxjr\" (UniqueName: \"kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr\") pod \"certified-operators-swwtq\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") " pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.299813 4660 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tmccw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]log ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]etcd ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/max-in-flight-filter ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 29 07:17:55 crc kubenswrapper[4660]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-startinformers ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 29 07:17:55 crc kubenswrapper[4660]: livez check failed
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.299857 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" podUID="33653b7e-b48e-447a-84ed-a21dc8b827ac" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.325501 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5w6w"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.329047 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.329321 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.829307869 +0000 UTC m=+166.382837768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.376919 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.430720 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.430808 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.430832 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tpl\" (UniqueName: \"kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.430854 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.431244 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:55.931220887 +0000 UTC m=+166.484750866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.534078 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.534499 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.534518 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tpl\" (UniqueName: \"kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.534538 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.535066 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.535374 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.035358608 +0000 UTC m=+166.588888507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.535591 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.583635 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tpl\" (UniqueName: \"kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl\") pod \"community-operators-m5w6w\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") " pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.618873 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.639653 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.640001 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.139986792 +0000 UTC m=+166.693516691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.740345 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.740635 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.240618225 +0000 UTC m=+166.794148124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
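Every TearDown and MountDevice failure above aborts at the same step: resolving the name kubevirt.io.hostpath-provisioner against the kubelet's in-memory table of registered CSI drivers. The kubelet has only been up for about 166 seconds (the m=+166 monotonic offsets), and the driver has not yet re-registered over its plugin socket, so each operation fails before any CSI RPC is attempted. A minimal Go sketch of that fail-fast lookup, using hypothetical types rather than the actual kubelet source:

```go
// Sketch of the registered-driver lookup that produces the
// "not found in the list of registered CSI drivers" errors above.
// csiDriverRegistry and newClient are illustrative names, not kubelet APIs.
package main

import (
	"fmt"
	"sync"
)

type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket path
}

// newClient fails fast when the driver has not (re)registered yet.
func (r *csiDriverRegistry) newClient(driver string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[driver]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return ep, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}} // empty after a kubelet restart
	if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("Error:", err) // same failure shape as TearDownAt/MountDevice above
	}
}
```

Both the unmount of the old registry pod (8f668bae-...) and the mount for the replacement pod (d038381e-...) hit this lookup, which is why they fail in lockstep until the driver registers further down in the log.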
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.752662 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.753254 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.761034 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.761315 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.776122 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rbqps"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.778181 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 29 07:17:55 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld
Nov 29 07:17:55 crc kubenswrapper[4660]: [+]process-running ok
Nov 29 07:17:55 crc kubenswrapper[4660]: healthz check failed
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.778214 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.788173 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.865966 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.36594969 +0000 UTC m=+166.919479589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.868101 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.906753 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.911638 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-974tz"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.928834 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"]
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.928856 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-974tz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.928907 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.928965 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-974tz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.928996 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.929191 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-974tz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.929216 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.947795 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.953925 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.972369 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.972538 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:55 crc kubenswrapper[4660]: I1129 07:17:55.972584 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:55 crc kubenswrapper[4660]: E1129 07:17:55.972697 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.472681761 +0000 UTC m=+167.026211660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
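The router's startup probe output above is an aggregated healthz body: each sub-check prints a [+] or [-] line, and any single failure ([-]backend-http, [-]has-synced) turns the whole endpoint into an HTTP 500, which the kubelet prober then records verbatim. A small Go sketch of that aggregation pattern, with invented check names, not the router's actual implementation:

```go
// Sketch of an aggregated /healthz handler of the kind the router exposes:
// per-check [+]/[-] lines, HTTP 500 if any check fails. Illustrative only.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // prober sees statuscode: 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("backends not yet synced") }},
		{"process-running", func() error { return nil }},
	}))
	// http.ListenAndServe(":8080", nil) // left commented: sketch only
}
```

The marketplace-operator probes in the same window fail one level earlier, at TCP connect ("connection refused"), meaning that container is not even listening yet; the distinction between a refused connection and a 500 body is often the fastest way to localize which layer is unhealthy.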
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.012346 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lfxc4"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.012408 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jxgk"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.074153 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.074243 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.074281 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.076015 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.076347 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.576337568 +0000 UTC m=+167.129867467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.124021 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.176185 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.176642 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.676605292 +0000 UTC m=+167.230135181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.179497 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swwtq"]
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.190737 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdtvz"]
Nov 29 07:17:56 crc kubenswrapper[4660]: W1129 07:17:56.200344 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07c2303f_89f5_4280_8830_05e28e5a1d96.slice/crio-aa438f0fa92fe24d5528ff5cc54149115207a9f83cafe24ae59fc918856f6f54 WatchSource:0}: Error finding container aa438f0fa92fe24d5528ff5cc54149115207a9f83cafe24ae59fc918856f6f54: Status 404 returned error can't find the container with id aa438f0fa92fe24d5528ff5cc54149115207a9f83cafe24ae59fc918856f6f54
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.216722 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerStarted","Data":"76918e7d0c4f33840bec9da79ddccc8b36e0281b33e9cb8d4d8621436f5372ea"}
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.273242 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.278889 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.279240 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.779197179 +0000 UTC m=+167.332727078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.320452 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bd2x2"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.379621 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.380636 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.880620535 +0000 UTC m=+167.434150434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.392846 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
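Each nestedpendingoperations error above embargoes one operation key (the volume, optionally scoped to a pod) until a deadline: "No retries permitted until <now + 500ms>". The reconciler keeps re-queuing the work, but the gate rejects it until the delay expires, which is why the same pair of errors repeats at roughly half-second intervals. A sketch of such a per-key retry gate in Go, with illustrative names rather than the actual nestedpendingoperations API:

```go
// Sketch of a per-operation retry gate: after a failure, retries are
// refused until lastFailure+delay, matching the
// "No retries permitted until ... (durationBeforeRetry 500ms)" lines.
package main

import (
	"fmt"
	"time"
)

type retryGate struct {
	lastFailure time.Time
	delay       time.Duration // kubelet starts at 500ms; assumed fixed here
}

// mayRetry returns an error while the operation is still embargoed.
func (g *retryGate) mayRetry(now time.Time) error {
	if deadline := g.lastFailure.Add(g.delay); now.Before(deadline) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), g.delay)
	}
	return nil
}

func main() {
	g := &retryGate{lastFailure: time.Now(), delay: 500 * time.Millisecond}
	if err := g.mayRetry(time.Now()); err != nil {
		fmt.Println(err) // operation is skipped and re-queued, as in the log
	}
	time.Sleep(600 * time.Millisecond)
	fmt.Println("retry allowed:", g.mayRetry(time.Now()) == nil)
}
```

Note that the MountDevice key carries no podName while the TearDown key does; the two are therefore tracked, and backed off, independently even though they fail for the same underlying reason.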
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.464940 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5w6w"]
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.480919 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.481281 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:56.981270649 +0000 UTC m=+167.534800548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.582150 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.582489 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.082471109 +0000 UTC m=+167.636001008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.683663 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.684142 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.18412862 +0000 UTC m=+167.737658519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.697921 4660 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.781000 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 29 07:17:56 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld
Nov 29 07:17:56 crc kubenswrapper[4660]: [+]process-running ok
Nov 29 07:17:56 crc kubenswrapper[4660]: healthz check failed
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.781245 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.786051 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.786315 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.286302566 +0000 UTC m=+167.839832475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.816427 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.845626 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"]
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.846736 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.887744 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.888155 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.388141113 +0000 UTC m=+167.941671012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.893813 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.928850 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"]
Nov 29 07:17:56 crc kubenswrapper[4660]: W1129 07:17:56.936429 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod06f1c34f_c458_4928_bdeb_f251abf5f975.slice/crio-62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584 WatchSource:0}: Error finding container 62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584: Status 404 returned error can't find the container with id 62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.991208 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.991366 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.991390 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49zb\" (UniqueName: \"kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:56 crc kubenswrapper[4660]: I1129 07:17:56.991432 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:56 crc kubenswrapper[4660]: E1129 07:17:56.991848 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.491834441 +0000 UTC m=+168.045364340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.108231 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z49zb\" (UniqueName: \"kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.108271 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw"
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.108303 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.108361 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.109036 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.109318 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4"
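The interleaving above shows the volume reconciler's per-pass pattern: volumes present in the desired state but not the actual state get VerifyControllerAttachedVolume, then an operationExecutor.MountVolume attempt. Local volume types (empty-dir, projected, host-path) succeed within milliseconds, while the CSI-backed PVC keeps being re-queued. A compact Go sketch of that loop, with hypothetical types standing in for the kubelet's desired/actual state caches:

```go
// Sketch of the reconciler pass visible above: unmounted desired volumes
// get a mount attempt; failures stay in the desired state and are retried
// on the next pass. Types are illustrative, not kubelet internals.
package main

import "fmt"

type volume struct {
	name    string
	mounted bool
	mount   func() error
}

func reconcile(desired []*volume) {
	for _, v := range desired {
		if v.mounted {
			continue
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v.name)
		if err := v.mount(); err != nil {
			fmt.Printf("MountVolume failed for %q: %v\n", v.name, err)
			continue // remains desired-but-unmounted; retried next pass
		}
		v.mounted = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	vols := []*volume{
		{name: "utilities", mount: func() error { return nil }}, // empty-dir: immediate
		{name: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", mount: func() error {
			return fmt.Errorf("driver not yet registered") // CSI: keeps failing
		}},
	}
	reconcile(vols)
}
```

This is why the marketplace catalog pods (which use only empty-dir and projected volumes) progress to sandbox creation while the image-registry pod stays blocked on its PVC.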
pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:17:57 crc kubenswrapper[4660]: E1129 07:17:57.109748 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.609595887 +0000 UTC m=+168.163125786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.145490 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z49zb\" (UniqueName: \"kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb\") pod \"redhat-marketplace-tk9c4\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") " pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.209114 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:57 crc kubenswrapper[4660]: E1129 07:17:57.209286 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.709260413 +0000 UTC m=+168.262790312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.209349 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:57 crc kubenswrapper[4660]: E1129 07:17:57.209689 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.709681126 +0000 UTC m=+168.263211025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44llw" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.226648 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.227875 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.249130 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.277007 4660 generic.go:334] "Generic (PLEG): container finished" podID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerID="a492bbd911e16635c86b02bf1fc654b76a49496bb777bd76e45edc1cea13e6f9" exitCode=0 Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.277097 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerDied","Data":"a492bbd911e16635c86b02bf1fc654b76a49496bb777bd76e45edc1cea13e6f9"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.277127 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerStarted","Data":"aa438f0fa92fe24d5528ff5cc54149115207a9f83cafe24ae59fc918856f6f54"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.280582 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.298537 4660 generic.go:334] "Generic (PLEG): container finished" podID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerID="d421c36b18a478c45100cd4055f8279a9eec728cf9441ee07e86faa96ce39e21" exitCode=0 Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.298632 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerDied","Data":"d421c36b18a478c45100cd4055f8279a9eec728cf9441ee07e86faa96ce39e21"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.312847 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.313116 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.313162 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.313190 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7lw\" (UniqueName: \"kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: E1129 07:17:57.313937 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-29 07:17:57.813920008 +0000 UTC m=+168.367449907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.314291 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06f1c34f-c458-4928-bdeb-f251abf5f975","Type":"ContainerStarted","Data":"62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.326989 4660 generic.go:334] "Generic (PLEG): container finished" podID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerID="90c373b115b8e84e0aafc92bd10e1404f6c8d81bf28dadeb06412630b9bbe12f" exitCode=0 Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.327084 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerDied","Data":"90c373b115b8e84e0aafc92bd10e1404f6c8d81bf28dadeb06412630b9bbe12f"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.327108 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerStarted","Data":"faaa8e9e3cd18c06f7fe65c2f599eafa5145b3500f5afb115ebe31a366090650"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.329957 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" event={"ID":"58979674-f1a9-45e9-9dbe-83b07b421682","Type":"ContainerStarted","Data":"9d516022783f3f56688d16cf3527e3fc9f3484215c0a50bb170a61dd6ad88d63"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.353334 4660 generic.go:334] "Generic (PLEG): container finished" podID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerID="169ee78f1ac0eb03427003808816545a0f680d135938bff288306480b5392f8c" exitCode=0 Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.353588 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerDied","Data":"169ee78f1ac0eb03427003808816545a0f680d135938bff288306480b5392f8c"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.353629 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerStarted","Data":"42d78aea31b00a4f69fefb990a5e66fe67130914d47a890a55e3d711f7982e4e"} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.367733 4660 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-29T07:17:56.697946461Z","Handler":null,"Name":""} Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.382273 4660 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.382321 4660 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.395673 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.420178 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.420212 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.420238 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.420257 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p7lw\" (UniqueName: \"kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.422311 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc 
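This span contains the turning point: the plugin watcher had already cached the registration socket (kubevirt.io.hostpath-provisioner-reg.sock) at 07:17:56.697, and here the reconciler runs RegisterPlugin, the kubelet validates the driver's name, endpoint, and supported version over the registration API, and only then adds it to the registry that every earlier lookup failed against. A Go sketch of that handshake with hypothetical types:

```go
// Sketch of the registration handshake logged above: discover socket,
// fetch the plugin's self-reported info, validate, then publish it into
// the driver registry. Illustrative types, not the kubelet's plugin API.
package main

import "fmt"

type pluginInfo struct {
	name     string
	endpoint string
	versions []string
}

type registry map[string]string // driver name -> CSI endpoint

func (r registry) register(p pluginInfo) error {
	if p.name == "" || p.endpoint == "" || len(p.versions) == 0 {
		return fmt.Errorf("invalid registration: %+v", p)
	}
	fmt.Printf("Trying to validate a new CSI Driver with name: %s endpoint: %s versions: %s\n",
		p.name, p.endpoint, p.versions[0])
	r[p.name] = p.endpoint // from this point, newClient lookups succeed
	fmt.Printf("Register new plugin with name: %s at endpoint: %s\n", p.name, p.endpoint)
	return nil
}

func main() {
	reg := registry{}
	_ = reg.register(pluginInfo{
		name:     "kubevirt.io.hostpath-provisioner",
		endpoint: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		versions: []string{"1.0.0"},
	})
}
```

The roughly 700ms gap between the socket appearing and registration completing is one reconciler period plus validation; the retries that were embargoed during that window succeed on their next pass, as the end of the log confirms.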
kubenswrapper[4660]: I1129 07:17:57.422559 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.442434 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p7lw\" (UniqueName: \"kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw\") pod \"redhat-marketplace-blz5c\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.557288 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.667674 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.774188 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.776838 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:57 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:57 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:57 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.776911 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.821688 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.822775 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.825385 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.844966 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"] Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.929333 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.929391 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:57 crc kubenswrapper[4660]: I1129 07:17:57.929817 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt8m\" (UniqueName: \"kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.031317 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.031376 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.031414 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt8m\" (UniqueName: \"kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.058491 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt8m\" (UniqueName: \"kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.229074 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"] Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.230566 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.235347 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.235409 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tn5f\" (UniqueName: \"kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.235578 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.255665 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"] Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.336533 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.336592 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tn5f\" (UniqueName: \"kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.336694 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.337252 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.337282 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.357944 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.358023 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content\") pod \"redhat-operators-wd4st\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") " pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.364885 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tn5f\" (UniqueName: \"kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f\") pod \"redhat-operators-h6mtm\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") " pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: W1129 07:17:58.380094 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c3fcb8_412f_4ea8_8da8_0fb4308886d0.slice/crio-7baf811728a1629c92d6e3e952ddca86f2b854628033336434bff1bc13223453 WatchSource:0}: Error finding container 7baf811728a1629c92d6e3e952ddca86f2b854628033336434bff1bc13223453: Status 404 returned error can't find the container with id 7baf811728a1629c92d6e3e952ddca86f2b854628033336434bff1bc13223453 Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.401023 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerStarted","Data":"42d180d9d1ca2d959e21a52c28a7036d9382d1b9dbb9a00190482e77628598b7"} Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.436791 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.548560 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.678265 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"] Nov 29 07:17:58 crc kubenswrapper[4660]: W1129 07:17:58.691058 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc165ea6a_e592_4d7f_b35c_314fd0bf1cbf.slice/crio-4b9ad9ab1ae28a00e11286629af3117fc40e792b365b12b7be18d61ba9651ee8 WatchSource:0}: Error finding container 4b9ad9ab1ae28a00e11286629af3117fc40e792b365b12b7be18d61ba9651ee8: Status 404 returned error can't find the container with id 4b9ad9ab1ae28a00e11286629af3117fc40e792b365b12b7be18d61ba9651ee8 Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.778809 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:58 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:58 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:58 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.778879 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:17:58 crc kubenswrapper[4660]: I1129 07:17:58.793041 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"] Nov 29 07:17:58 crc kubenswrapper[4660]: W1129 07:17:58.798502 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda157b019_1b17_4d7e_8a47_868b3d24496f.slice/crio-a413cbc9782276f3f49196eb00c1b7ff21f1ff6b6a6ba1d80d47c84a8e74102a WatchSource:0}: Error finding container a413cbc9782276f3f49196eb00c1b7ff21f1ff6b6a6ba1d80d47c84a8e74102a: Status 404 returned error can't find the container with id a413cbc9782276f3f49196eb00c1b7ff21f1ff6b6a6ba1d80d47c84a8e74102a Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.037773 4660 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.037866 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.091457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44llw\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.149657 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.158032 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.233157 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.310229 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.316221 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tmccw" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.431150 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerStarted","Data":"a413cbc9782276f3f49196eb00c1b7ff21f1ff6b6a6ba1d80d47c84a8e74102a"} Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.441487 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerStarted","Data":"4b9ad9ab1ae28a00e11286629af3117fc40e792b365b12b7be18d61ba9651ee8"} Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.455016 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerStarted","Data":"7baf811728a1629c92d6e3e952ddca86f2b854628033336434bff1bc13223453"} Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.588340 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"] Nov 29 07:17:59 crc kubenswrapper[4660]: W1129 07:17:59.697420 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd038381e_2b8e_4b9d_8ca4_301d2ecefcd0.slice/crio-8fb12781113fa44f111cd1c33d906719c775930e1e9b22aeb8b34c3997013226 WatchSource:0}: Error finding container 8fb12781113fa44f111cd1c33d906719c775930e1e9b22aeb8b34c3997013226: Status 404 returned error can't find the container with id 8fb12781113fa44f111cd1c33d906719c775930e1e9b22aeb8b34c3997013226 Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.715187 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.776311 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:17:59 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:17:59 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:17:59 crc kubenswrapper[4660]: healthz check failed Nov 29 07:17:59 crc kubenswrapper[4660]: I1129 07:17:59.776354 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.462815 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" 
event={"ID":"58979674-f1a9-45e9-9dbe-83b07b421682","Type":"ContainerStarted","Data":"0926a4fd96e44988b299080ff4eba9f48fed7f53516518fe0e05a7474c4477b9"} Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.463658 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" event={"ID":"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0","Type":"ContainerStarted","Data":"8fb12781113fa44f111cd1c33d906719c775930e1e9b22aeb8b34c3997013226"} Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.776144 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:00 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:00 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:00 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.776627 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.873302 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.874118 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.881785 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.882135 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.885326 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.919976 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:00 crc kubenswrapper[4660]: I1129 07:18:00.920291 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.022226 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.022870 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.022387 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.048457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.198442 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.438781 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.776773 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:01 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:01 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:01 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:01 crc kubenswrapper[4660]: I1129 07:18:01.776838 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:02 crc kubenswrapper[4660]: I1129 07:18:02.776431 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:02 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:02 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:02 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:02 crc kubenswrapper[4660]: I1129 07:18:02.776495 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:03 crc kubenswrapper[4660]: E1129 07:18:03.481722 4660 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s" Nov 29 07:18:03 crc kubenswrapper[4660]: I1129 07:18:03.513809 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf","Type":"ContainerStarted","Data":"9508d851991f3b4a8acb94e4d2d7e0da903582ba9a11735078560ad4867c29dd"} Nov 
29 07:18:03 crc kubenswrapper[4660]: I1129 07:18:03.780188 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:03 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:03 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:03 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:03 crc kubenswrapper[4660]: I1129 07:18:03.780381 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.400768 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.401048 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.400831 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-kpp2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.401204 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kpp2s" podUID="6fdca584-ca4e-44ea-b149-bf27b1896eca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.401830 4660 patch_prober.go:28] interesting pod/console-f9d7485db-8qjn8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.401865 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8qjn8" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.495771 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sggfx" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.520259 4660 generic.go:334] "Generic (PLEG): container finished" podID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerID="392675fbb77d7416b4f50ee6c85e28556efd2e02bb8b0d51793c0ee6b3f27507" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.520432 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" 
event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerDied","Data":"392675fbb77d7416b4f50ee6c85e28556efd2e02bb8b0d51793c0ee6b3f27507"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.522399 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" event={"ID":"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0","Type":"ContainerStarted","Data":"4100398878d4b4dbc55fa1a57eb652af8c008137faa0512a31c34b781ce187ec"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.524342 4660 generic.go:334] "Generic (PLEG): container finished" podID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerID="eb66690338b478410252df123517159b0f3788ef9d4a644ba0e39eb8e9fe01f8" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.524395 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerDied","Data":"eb66690338b478410252df123517159b0f3788ef9d4a644ba0e39eb8e9fe01f8"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.528279 4660 generic.go:334] "Generic (PLEG): container finished" podID="74fd06c4-6eb8-4056-ba52-e1260a0d4058" containerID="2bebb1b480c679df46386778a530c4125916fd0a57c9dc8d58752ea533d27abb" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.528331 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" event={"ID":"74fd06c4-6eb8-4056-ba52-e1260a0d4058","Type":"ContainerDied","Data":"2bebb1b480c679df46386778a530c4125916fd0a57c9dc8d58752ea533d27abb"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.531078 4660 generic.go:334] "Generic (PLEG): container finished" podID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerID="3ed1a1c4963b2173b8a6520c7053cf0196c633da18584f68cc266bb19f1627ea" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.531129 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerDied","Data":"3ed1a1c4963b2173b8a6520c7053cf0196c633da18584f68cc266bb19f1627ea"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.532539 4660 generic.go:334] "Generic (PLEG): container finished" podID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerID="e7c8a7fef523af8f66cb71767450cdbd8b13199705fc4966a68dbb1731d2238d" exitCode=0 Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.532590 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerDied","Data":"e7c8a7fef523af8f66cb71767450cdbd8b13199705fc4966a68dbb1731d2238d"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.536485 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06f1c34f-c458-4928-bdeb-f251abf5f975","Type":"ContainerStarted","Data":"b07ab3df37b89721eaa2ead5dac6310f843162c0f4ba3d4dc61985abdd48e598"} Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.561273 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-smxsv" podStartSLOduration=21.561253897 podStartE2EDuration="21.561253897s" podCreationTimestamp="2025-11-29 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:18:04.560794935 +0000 UTC m=+175.114324844" watchObservedRunningTime="2025-11-29 07:18:04.561253897 +0000 UTC m=+175.114783796" Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.779298 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:04 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:04 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:04 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:04 crc kubenswrapper[4660]: I1129 07:18:04.779375 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.500270 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.500318 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.545362 4660 generic.go:334] "Generic (PLEG): container finished" podID="06f1c34f-c458-4928-bdeb-f251abf5f975" containerID="b07ab3df37b89721eaa2ead5dac6310f843162c0f4ba3d4dc61985abdd48e598" exitCode=0 Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.545438 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06f1c34f-c458-4928-bdeb-f251abf5f975","Type":"ContainerDied","Data":"b07ab3df37b89721eaa2ead5dac6310f843162c0f4ba3d4dc61985abdd48e598"} Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.553413 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf","Type":"ContainerStarted","Data":"417cb14bbc3e6c23f6d1536124bfb2dd6acbc22610eb51266e827ca77c47207a"} Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.618064 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" podStartSLOduration=153.618042894 podStartE2EDuration="2m33.618042894s" podCreationTimestamp="2025-11-29 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:18:05.607322679 +0000 UTC m=+176.160852578" watchObservedRunningTime="2025-11-29 07:18:05.618042894 +0000 UTC m=+176.171572793" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.660973 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.660956648 podStartE2EDuration="5.660956648s" podCreationTimestamp="2025-11-29 07:18:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:18:05.658970732 +0000 UTC m=+176.212500631" watchObservedRunningTime="2025-11-29 07:18:05.660956648 +0000 UTC m=+176.214486547" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.778181 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:05 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:05 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:05 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.778251 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.912043 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:18:05 crc kubenswrapper[4660]: I1129 07:18:05.987178 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.090045 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4kt\" (UniqueName: \"kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt\") pod \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.090122 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume\") pod \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.090213 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume\") pod \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\" (UID: \"74fd06c4-6eb8-4056-ba52-e1260a0d4058\") " Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.091440 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume" (OuterVolumeSpecName: "config-volume") pod "74fd06c4-6eb8-4056-ba52-e1260a0d4058" (UID: "74fd06c4-6eb8-4056-ba52-e1260a0d4058"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.095387 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt" (OuterVolumeSpecName: "kube-api-access-vs4kt") pod "74fd06c4-6eb8-4056-ba52-e1260a0d4058" (UID: "74fd06c4-6eb8-4056-ba52-e1260a0d4058"). InnerVolumeSpecName "kube-api-access-vs4kt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.110742 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74fd06c4-6eb8-4056-ba52-e1260a0d4058" (UID: "74fd06c4-6eb8-4056-ba52-e1260a0d4058"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.192816 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74fd06c4-6eb8-4056-ba52-e1260a0d4058-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.192861 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74fd06c4-6eb8-4056-ba52-e1260a0d4058-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.192875 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4kt\" (UniqueName: \"kubernetes.io/projected/74fd06c4-6eb8-4056-ba52-e1260a0d4058-kube-api-access-vs4kt\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.567497 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" event={"ID":"74fd06c4-6eb8-4056-ba52-e1260a0d4058","Type":"ContainerDied","Data":"170b7e8f62d9ac8bb31cf797c742f8406c02b4f8699fea8ce13d6b9401a76a21"} Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.567538 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="170b7e8f62d9ac8bb31cf797c742f8406c02b4f8699fea8ce13d6b9401a76a21" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.567784 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.777312 4660 patch_prober.go:28] interesting pod/router-default-5444994796-rbqps container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 29 07:18:06 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Nov 29 07:18:06 crc kubenswrapper[4660]: [+]process-running ok Nov 29 07:18:06 crc kubenswrapper[4660]: healthz check failed Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.777627 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbqps" podUID="8278af76-59f6-440c-a724-ee73498ea89f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.826458 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.918713 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access\") pod \"06f1c34f-c458-4928-bdeb-f251abf5f975\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.918826 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir\") pod \"06f1c34f-c458-4928-bdeb-f251abf5f975\" (UID: \"06f1c34f-c458-4928-bdeb-f251abf5f975\") " Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.919036 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "06f1c34f-c458-4928-bdeb-f251abf5f975" (UID: "06f1c34f-c458-4928-bdeb-f251abf5f975"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:06 crc kubenswrapper[4660]: I1129 07:18:06.942686 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "06f1c34f-c458-4928-bdeb-f251abf5f975" (UID: "06f1c34f-c458-4928-bdeb-f251abf5f975"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.019785 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f1c34f-c458-4928-bdeb-f251abf5f975-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.019818 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06f1c34f-c458-4928-bdeb-f251abf5f975-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.578511 4660 generic.go:334] "Generic (PLEG): container finished" podID="f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" containerID="417cb14bbc3e6c23f6d1536124bfb2dd6acbc22610eb51266e827ca77c47207a" exitCode=0 Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.578582 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf","Type":"ContainerDied","Data":"417cb14bbc3e6c23f6d1536124bfb2dd6acbc22610eb51266e827ca77c47207a"} Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.590294 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06f1c34f-c458-4928-bdeb-f251abf5f975","Type":"ContainerDied","Data":"62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584"} Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.590335 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b6f346a4d252a29f777dd908e5dfd6e334ff74579f66897680c7ca4c8b7584" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.590375 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.776683 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:18:07 crc kubenswrapper[4660]: I1129 07:18:07.779603 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rbqps" Nov 29 07:18:08 crc kubenswrapper[4660]: I1129 07:18:08.926555 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.044578 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir\") pod \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.044672 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access\") pod \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\" (UID: \"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf\") " Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.044750 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" (UID: "f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.045097 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.066711 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" (UID: "f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.147671 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.234024 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.604159 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf","Type":"ContainerDied","Data":"9508d851991f3b4a8acb94e4d2d7e0da903582ba9a11735078560ad4867c29dd"} Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.604199 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9508d851991f3b4a8acb94e4d2d7e0da903582ba9a11735078560ad4867c29dd" Nov 29 07:18:09 crc kubenswrapper[4660]: I1129 07:18:09.604250 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 29 07:18:14 crc kubenswrapper[4660]: I1129 07:18:14.405566 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kpp2s" Nov 29 07:18:14 crc kubenswrapper[4660]: I1129 07:18:14.591467 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:18:14 crc kubenswrapper[4660]: I1129 07:18:14.596835 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:18:18 crc kubenswrapper[4660]: I1129 07:18:18.592646 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 29 07:18:19 crc kubenswrapper[4660]: I1129 07:18:19.241829 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:18:25 crc kubenswrapper[4660]: I1129 07:18:25.906785 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jxb6" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.668242 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:18:33 crc kubenswrapper[4660]: E1129 07:18:33.668814 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fd06c4-6eb8-4056-ba52-e1260a0d4058" containerName="collect-profiles" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.668832 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fd06c4-6eb8-4056-ba52-e1260a0d4058" containerName="collect-profiles" Nov 29 07:18:33 crc kubenswrapper[4660]: E1129 07:18:33.668848 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f1c34f-c458-4928-bdeb-f251abf5f975" containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.668857 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f1c34f-c458-4928-bdeb-f251abf5f975" containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: E1129 07:18:33.669064 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" 
containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.669074 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.669189 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f1c34f-c458-4928-bdeb-f251abf5f975" containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.669210 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fd06c4-6eb8-4056-ba52-e1260a0d4058" containerName="collect-profiles" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.669221 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7df9e7f-6f11-49ae-8ff6-0cc3699fb7cf" containerName="pruner" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.669673 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.673686 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.677784 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.691377 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.790020 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.790107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.891683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.891736 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.891838 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 
07:18:33.918449 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:33 crc kubenswrapper[4660]: I1129 07:18:33.993108 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:18:35 crc kubenswrapper[4660]: I1129 07:18:35.500431 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:18:35 crc kubenswrapper[4660]: I1129 07:18:35.500521 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:18:35 crc kubenswrapper[4660]: I1129 07:18:35.500602 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:18:35 crc kubenswrapper[4660]: I1129 07:18:35.516243 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:18:35 crc kubenswrapper[4660]: I1129 07:18:35.516569 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21" gracePeriod=600 Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.273588 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.275315 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.278206 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.281058 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.282399 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.282829 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.384347 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.384399 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.384422 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.384472 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.384538 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock\") pod \"installer-9-crc\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.401646 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"908e5789-bf04-4249-8e14-2573398bd1c3\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:39 crc kubenswrapper[4660]: I1129 07:18:39.592418 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:18:45 crc kubenswrapper[4660]: I1129 07:18:45.837242 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21" exitCode=0 Nov 29 07:18:45 crc kubenswrapper[4660]: I1129 07:18:45.837302 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21"} Nov 29 07:19:01 crc kubenswrapper[4660]: E1129 07:19:01.094056 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2950509288/3\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:19:01 crc kubenswrapper[4660]: E1129 07:19:01.094803 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z49zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tk9c4_openshift-marketplace(0787f5de-a9f4-435c-8553-fcb080d3950b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2950509288/3\": happened during read: context canceled" logger="UnhandledError" Nov 29 07:19:01 crc kubenswrapper[4660]: E1129 07:19:01.096026 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing 
blob: storing blob to file \\\"/var/tmp/container_images_storage2950509288/3\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tk9c4" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" Nov 29 07:19:07 crc kubenswrapper[4660]: E1129 07:19:07.248159 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:19:07 crc kubenswrapper[4660]: E1129 07:19:07.248793 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkv59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mdtvz_openshift-marketplace(b38d4bdc-266e-423b-89e8-4bea085d5ce7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:07 crc kubenswrapper[4660]: E1129 07:19:07.250366 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mdtvz" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" Nov 29 07:19:08 crc kubenswrapper[4660]: E1129 07:19:08.904112 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mdtvz" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" Nov 29 07:19:08 crc kubenswrapper[4660]: E1129 07:19:08.986349 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 29 07:19:08 
crc kubenswrapper[4660]: E1129 07:19:08.986515 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7tpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m5w6w_openshift-marketplace(3d455272-6d6e-4fa8-8a59-60ddcaf10ab2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:08 crc kubenswrapper[4660]: E1129 07:19:08.988994 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m5w6w" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.001530 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.001691 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-swwtq_openshift-marketplace(07c2303f-89f5-4280-8830-05e28e5a1d96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.002845 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-swwtq" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.034969 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.035118 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr596,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vw7mz_openshift-marketplace(2071aaa8-38a7-47d8-bf67-b3862af09221): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:09 crc kubenswrapper[4660]: E1129 07:19:09.036297 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vw7mz" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.019237 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-swwtq" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.019287 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m5w6w" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.019329 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vw7mz" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.060707 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.060852 4660 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrt8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wd4st_openshift-marketplace(c165ea6a-e592-4d7f-b35c-314fd0bf1cbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.061996 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wd4st" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.093745 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.094129 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tn5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-h6mtm_openshift-marketplace(a157b019-1b17-4d7e-8a47-868b3d24496f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:12 crc kubenswrapper[4660]: E1129 07:19:12.095321 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h6mtm" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" Nov 29 07:19:13 crc kubenswrapper[4660]: E1129 07:19:13.229789 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h6mtm" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" Nov 29 07:19:13 crc kubenswrapper[4660]: E1129 07:19:13.286978 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 29 07:19:13 crc kubenswrapper[4660]: E1129 07:19:13.287138 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6p7lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-blz5c_openshift-marketplace(36c3fcb8-412f-4ea8-8da8-0fb4308886d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:19:13 crc kubenswrapper[4660]: E1129 07:19:13.288850 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-blz5c" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" Nov 29 07:19:13 crc kubenswrapper[4660]: I1129 07:19:13.524582 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 29 07:19:13 crc kubenswrapper[4660]: W1129 07:19:13.545600 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod908e5789_bf04_4249_8e14_2573398bd1c3.slice/crio-a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4 WatchSource:0}: Error finding container a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4: Status 404 returned error can't find the container with id a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4 Nov 29 07:19:13 crc kubenswrapper[4660]: I1129 07:19:13.690549 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 29 07:19:13 crc kubenswrapper[4660]: W1129 07:19:13.706421 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod98c0a460_c58b_416a_b04c_d9fc95edc7dc.slice/crio-f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5 WatchSource:0}: Error finding container f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5: Status 404 returned error can't find the container with id f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5 Nov 29 07:19:14 crc kubenswrapper[4660]: I1129 07:19:14.024797 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" 
event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5"} Nov 29 07:19:14 crc kubenswrapper[4660]: I1129 07:19:14.029449 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98c0a460-c58b-416a-b04c-d9fc95edc7dc","Type":"ContainerStarted","Data":"f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5"} Nov 29 07:19:14 crc kubenswrapper[4660]: I1129 07:19:14.035262 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"908e5789-bf04-4249-8e14-2573398bd1c3","Type":"ContainerStarted","Data":"6b567656d7a9741cd755d5829b23c33b4781c07ad337acce7aa9410f12e7ca6b"} Nov 29 07:19:14 crc kubenswrapper[4660]: I1129 07:19:14.035380 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"908e5789-bf04-4249-8e14-2573398bd1c3","Type":"ContainerStarted","Data":"a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4"} Nov 29 07:19:15 crc kubenswrapper[4660]: I1129 07:19:15.041760 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerStarted","Data":"328cea9a11a78a7ebdfe0e2ccc44e4ad823f0ab4e88c4015b6b24a09be16949f"} Nov 29 07:19:15 crc kubenswrapper[4660]: I1129 07:19:15.043489 4660 generic.go:334] "Generic (PLEG): container finished" podID="98c0a460-c58b-416a-b04c-d9fc95edc7dc" containerID="07cd74d0405d12e321ae8eb7859033de4061cbb7aa473e719acc14ed59fdb992" exitCode=0 Nov 29 07:19:15 crc kubenswrapper[4660]: I1129 07:19:15.044215 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98c0a460-c58b-416a-b04c-d9fc95edc7dc","Type":"ContainerDied","Data":"07cd74d0405d12e321ae8eb7859033de4061cbb7aa473e719acc14ed59fdb992"} Nov 29 07:19:15 crc kubenswrapper[4660]: I1129 07:19:15.066936 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=36.066913619 podStartE2EDuration="36.066913619s" podCreationTimestamp="2025-11-29 07:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:19:14.063235083 +0000 UTC m=+244.616764982" watchObservedRunningTime="2025-11-29 07:19:15.066913619 +0000 UTC m=+245.620443518" Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.053679 4660 generic.go:334] "Generic (PLEG): container finished" podID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerID="328cea9a11a78a7ebdfe0e2ccc44e4ad823f0ab4e88c4015b6b24a09be16949f" exitCode=0 Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.053858 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerDied","Data":"328cea9a11a78a7ebdfe0e2ccc44e4ad823f0ab4e88c4015b6b24a09be16949f"} Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.299334 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.486128 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir\") pod \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.486405 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access\") pod \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\" (UID: \"98c0a460-c58b-416a-b04c-d9fc95edc7dc\") " Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.486200 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98c0a460-c58b-416a-b04c-d9fc95edc7dc" (UID: "98c0a460-c58b-416a-b04c-d9fc95edc7dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.486600 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.491385 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98c0a460-c58b-416a-b04c-d9fc95edc7dc" (UID: "98c0a460-c58b-416a-b04c-d9fc95edc7dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:19:16 crc kubenswrapper[4660]: I1129 07:19:16.587495 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c0a460-c58b-416a-b04c-d9fc95edc7dc-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.061818 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerStarted","Data":"b6682ab2c45b4946e69acc5808cc22317776167390825b0f42ce9102c13bfd3d"} Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.066294 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"98c0a460-c58b-416a-b04c-d9fc95edc7dc","Type":"ContainerDied","Data":"f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5"} Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.066529 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42cbcc408f5f1a451233fb42814396963a1fea430970e00d8140a2d1becc0f5" Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.066676 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.395979 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:19:17 crc kubenswrapper[4660]: I1129 07:19:17.396029 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:19:18 crc kubenswrapper[4660]: I1129 07:19:18.452484 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tk9c4" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="registry-server" probeResult="failure" output=< Nov 29 07:19:18 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:19:18 crc kubenswrapper[4660]: > Nov 29 07:19:22 crc kubenswrapper[4660]: I1129 07:19:22.720287 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tk9c4" podStartSLOduration=15.628664367 podStartE2EDuration="1m26.720268685s" podCreationTimestamp="2025-11-29 07:17:56 +0000 UTC" firstStartedPulling="2025-11-29 07:18:05.556275312 +0000 UTC m=+176.109805211" lastFinishedPulling="2025-11-29 07:19:16.64787959 +0000 UTC m=+247.201409529" observedRunningTime="2025-11-29 07:19:17.088891919 +0000 UTC m=+247.642421818" watchObservedRunningTime="2025-11-29 07:19:22.720268685 +0000 UTC m=+253.273798584" Nov 29 07:19:25 crc kubenswrapper[4660]: I1129 07:19:25.104558 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerStarted","Data":"5e7d5901485f5a9b1a956e524bf43a31e464d3a9b4cad0c92f9202607507a9c2"} Nov 29 07:19:25 crc kubenswrapper[4660]: I1129 07:19:25.926160 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8dwgp"] Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.111812 4660 generic.go:334] "Generic (PLEG): container finished" podID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerID="21098d62a94bf8a81f08385de10ca4771d540ba5f769f3eae41ca1c6a3394338" exitCode=0 Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.112014 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerDied","Data":"21098d62a94bf8a81f08385de10ca4771d540ba5f769f3eae41ca1c6a3394338"} Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.114972 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerStarted","Data":"7a06e917a8c63a2aa9a3f5148c2a14236aeea6bd19f62e1a21646380cb098bd6"} Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.119057 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerStarted","Data":"714152624d7f3a029082b06ccc2c68cb37042c99720f30e6c6d9f5e04c78880a"} Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.121083 4660 generic.go:334] "Generic (PLEG): container finished" podID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerID="5e7d5901485f5a9b1a956e524bf43a31e464d3a9b4cad0c92f9202607507a9c2" exitCode=0 Nov 29 07:19:26 crc kubenswrapper[4660]: I1129 07:19:26.121121 
4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerDied","Data":"5e7d5901485f5a9b1a956e524bf43a31e464d3a9b4cad0c92f9202607507a9c2"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.127860 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerStarted","Data":"8dedc36190a8daa62a9548e928c1fcaee53941d1017cc07debcd67e094c5b977"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.131577 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerStarted","Data":"cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.134028 4660 generic.go:334] "Generic (PLEG): container finished" podID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerID="7a06e917a8c63a2aa9a3f5148c2a14236aeea6bd19f62e1a21646380cb098bd6" exitCode=0 Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.134109 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerDied","Data":"7a06e917a8c63a2aa9a3f5148c2a14236aeea6bd19f62e1a21646380cb098bd6"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.137022 4660 generic.go:334] "Generic (PLEG): container finished" podID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerID="3e74eb081deec91621ee3046be952b14689a15b434e7458c093f1a45355f5232" exitCode=0 Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.137075 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerDied","Data":"3e74eb081deec91621ee3046be952b14689a15b434e7458c093f1a45355f5232"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.141672 4660 generic.go:334] "Generic (PLEG): container finished" podID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerID="714152624d7f3a029082b06ccc2c68cb37042c99720f30e6c6d9f5e04c78880a" exitCode=0 Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.141714 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerDied","Data":"714152624d7f3a029082b06ccc2c68cb37042c99720f30e6c6d9f5e04c78880a"} Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.156098 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mdtvz" podStartSLOduration=3.81385392 podStartE2EDuration="1m33.156077501s" podCreationTimestamp="2025-11-29 07:17:54 +0000 UTC" firstStartedPulling="2025-11-29 07:17:57.359242457 +0000 UTC m=+167.912772356" lastFinishedPulling="2025-11-29 07:19:26.701466038 +0000 UTC m=+257.254995937" observedRunningTime="2025-11-29 07:19:27.152565779 +0000 UTC m=+257.706095678" watchObservedRunningTime="2025-11-29 07:19:27.156077501 +0000 UTC m=+257.709607400" Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.207763 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-blz5c" podStartSLOduration=9.160772773 podStartE2EDuration="1m30.207743113s" 
podCreationTimestamp="2025-11-29 07:17:57 +0000 UTC" firstStartedPulling="2025-11-29 07:18:05.575305046 +0000 UTC m=+176.128834945" lastFinishedPulling="2025-11-29 07:19:26.622275386 +0000 UTC m=+257.175805285" observedRunningTime="2025-11-29 07:19:27.179009275 +0000 UTC m=+257.732539194" watchObservedRunningTime="2025-11-29 07:19:27.207743113 +0000 UTC m=+257.761273012" Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.439413 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.495601 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tk9c4" Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.559346 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:27 crc kubenswrapper[4660]: I1129 07:19:27.561124 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:28 crc kubenswrapper[4660]: I1129 07:19:28.150036 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerStarted","Data":"ceba46ebcf588474bc34ad4414bcad73d3f43e76710633c48f8bc5e44c2fd2ba"} Nov 29 07:19:28 crc kubenswrapper[4660]: I1129 07:19:28.615315 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-blz5c" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="registry-server" probeResult="failure" output=< Nov 29 07:19:28 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:19:28 crc kubenswrapper[4660]: > Nov 29 07:19:32 crc kubenswrapper[4660]: I1129 07:19:32.175083 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerStarted","Data":"9c63e8aea470e713620d42e2c24fefd573d0c8d0315538fa1b2a6bd6b7835cad"} Nov 29 07:19:32 crc kubenswrapper[4660]: I1129 07:19:32.176559 4660 generic.go:334] "Generic (PLEG): container finished" podID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerID="ceba46ebcf588474bc34ad4414bcad73d3f43e76710633c48f8bc5e44c2fd2ba" exitCode=0 Nov 29 07:19:32 crc kubenswrapper[4660]: I1129 07:19:32.176591 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerDied","Data":"ceba46ebcf588474bc34ad4414bcad73d3f43e76710633c48f8bc5e44c2fd2ba"} Nov 29 07:19:34 crc kubenswrapper[4660]: I1129 07:19:34.189804 4660 generic.go:334] "Generic (PLEG): container finished" podID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerID="9c63e8aea470e713620d42e2c24fefd573d0c8d0315538fa1b2a6bd6b7835cad" exitCode=0 Nov 29 07:19:34 crc kubenswrapper[4660]: I1129 07:19:34.189892 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerDied","Data":"9c63e8aea470e713620d42e2c24fefd573d0c8d0315538fa1b2a6bd6b7835cad"} Nov 29 07:19:35 crc kubenswrapper[4660]: I1129 07:19:35.142502 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:19:35 crc 
kubenswrapper[4660]: I1129 07:19:35.142958 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:19:35 crc kubenswrapper[4660]: I1129 07:19:35.542585 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:19:35 crc kubenswrapper[4660]: I1129 07:19:35.601490 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mdtvz" Nov 29 07:19:37 crc kubenswrapper[4660]: I1129 07:19:37.597920 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:37 crc kubenswrapper[4660]: I1129 07:19:37.652666 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:37 crc kubenswrapper[4660]: I1129 07:19:37.944311 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:19:39 crc kubenswrapper[4660]: I1129 07:19:39.219398 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-blz5c" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="registry-server" containerID="cri-o://cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea" gracePeriod=2 Nov 29 07:19:41 crc kubenswrapper[4660]: E1129 07:19:41.970403 4660 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c3fcb8_412f_4ea8_8da8_0fb4308886d0.slice/crio-conmon-cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:19:42 crc kubenswrapper[4660]: I1129 07:19:42.238386 4660 generic.go:334] "Generic (PLEG): container finished" podID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerID="cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea" exitCode=0 Nov 29 07:19:42 crc kubenswrapper[4660]: I1129 07:19:42.238477 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerDied","Data":"cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea"} Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.225461 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.264666 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-blz5c" event={"ID":"36c3fcb8-412f-4ea8-8da8-0fb4308886d0","Type":"ContainerDied","Data":"7baf811728a1629c92d6e3e952ddca86f2b854628033336434bff1bc13223453"} Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.264729 4660 scope.go:117] "RemoveContainer" containerID="cacd850a3d61df419caf28da56d366749a0191875caf719d391e8bd0491818ea" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.264744 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-blz5c" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.269582 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content\") pod \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.290890 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36c3fcb8-412f-4ea8-8da8-0fb4308886d0" (UID: "36c3fcb8-412f-4ea8-8da8-0fb4308886d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.370822 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p7lw\" (UniqueName: \"kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw\") pod \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.370911 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities\") pod \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\" (UID: \"36c3fcb8-412f-4ea8-8da8-0fb4308886d0\") " Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.371085 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.372038 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities" (OuterVolumeSpecName: "utilities") pod "36c3fcb8-412f-4ea8-8da8-0fb4308886d0" (UID: "36c3fcb8-412f-4ea8-8da8-0fb4308886d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.377097 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw" (OuterVolumeSpecName: "kube-api-access-6p7lw") pod "36c3fcb8-412f-4ea8-8da8-0fb4308886d0" (UID: "36c3fcb8-412f-4ea8-8da8-0fb4308886d0"). InnerVolumeSpecName "kube-api-access-6p7lw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.472043 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p7lw\" (UniqueName: \"kubernetes.io/projected/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-kube-api-access-6p7lw\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.472100 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36c3fcb8-412f-4ea8-8da8-0fb4308886d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.594877 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:19:46 crc kubenswrapper[4660]: I1129 07:19:46.598756 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-blz5c"] Nov 29 07:19:47 crc kubenswrapper[4660]: I1129 07:19:47.701429 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" path="/var/lib/kubelet/pods/36c3fcb8-412f-4ea8-8da8-0fb4308886d0/volumes" Nov 29 07:19:50 crc kubenswrapper[4660]: I1129 07:19:50.970374 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift" containerID="cri-o://c70417ddc3b9603f65aefca39955d9c043718df8acea820cde0d5454e9f1a7a7" gracePeriod=15 Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.417525 4660 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.417888 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8" gracePeriod=15 Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.417950 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828" gracePeriod=15 Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.418013 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb" gracePeriod=15 Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.418069 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8" gracePeriod=15 Nov 29 07:19:51 crc kubenswrapper[4660]: I1129 07:19:51.418109 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5" gracePeriod=15 Nov 29 07:19:52 crc kubenswrapper[4660]: I1129 07:19:52.595012 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596000 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596020 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596035 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c0a460-c58b-416a-b04c-d9fc95edc7dc" containerName="pruner" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596043 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c0a460-c58b-416a-b04c-d9fc95edc7dc" containerName="pruner" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596053 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596061 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596077 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596084 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596095 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="extract-utilities" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596103 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="extract-utilities" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596301 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="extract-content" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596309 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="extract-content" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596319 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596326 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596336 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="registry-server" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596344 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="registry-server" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596355 4660 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596363 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:52.596374 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596381 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596488 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596504 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596522 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596536 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c3fcb8-412f-4ea8-8da8-0fb4308886d0" containerName="registry-server" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596549 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596558 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c0a460-c58b-416a-b04c-d9fc95edc7dc" containerName="pruner" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.596566 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.598331 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.599151 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653559 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653674 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653727 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653763 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653805 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653847 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653894 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.653974 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763561 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763720 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763752 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763776 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763808 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763847 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763891 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.763998 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764055 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") 
pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764097 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764135 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764175 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764211 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764249 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:52.764281 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:54.153006 4660 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8dwgp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:54.153370 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:54.154388 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event=< Nov 29 07:20:04 crc kubenswrapper[4660]: 
Nov 29 07:20:04 crc kubenswrapper[4660]: &Event{ObjectMeta:{oauth-openshift-558db77b4-8dwgp.187c692c90ed4c68 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-8dwgp,UID:3069d78e-6be2-46bf-baae-bbe2ccf0b06b,APIVersion:v1,ResourceVersion:27246,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.17:6443/healthz": dial tcp 10.217.0.17:6443: connect: connection refused
Nov 29 07:20:04 crc kubenswrapper[4660]: body:
Nov 29 07:20:04 crc kubenswrapper[4660]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,LastTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Nov 29 07:20:04 crc kubenswrapper[4660]: >
Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:54.314056 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:54.314789 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8" exitCode=2
Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:55.320998 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:55.321763 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb" exitCode=0
Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:57.312799 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event=<
Nov 29 07:20:04 crc kubenswrapper[4660]: &Event{ObjectMeta:{oauth-openshift-558db77b4-8dwgp.187c692c90ed4c68 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-8dwgp,UID:3069d78e-6be2-46bf-baae-bbe2ccf0b06b,APIVersion:v1,ResourceVersion:27246,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.17:6443/healthz": dial tcp 10.217.0.17:6443: connect: connection refused
Nov 29 07:20:04 crc kubenswrapper[4660]: body:
Nov 29 07:20:04 crc kubenswrapper[4660]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,LastTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Nov 29 07:20:04 crc kubenswrapper[4660]: >
Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:19:57.623668 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.165:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:57.624146 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.350791 4660 generic.go:334] "Generic (PLEG): container finished" podID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerID="c70417ddc3b9603f65aefca39955d9c043718df8acea820cde0d5454e9f1a7a7" exitCode=0 Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.350893 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" event={"ID":"3069d78e-6be2-46bf-baae-bbe2ccf0b06b","Type":"ContainerDied","Data":"c70417ddc3b9603f65aefca39955d9c043718df8acea820cde0d5454e9f1a7a7"} Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.352849 4660 generic.go:334] "Generic (PLEG): container finished" podID="908e5789-bf04-4249-8e14-2573398bd1c3" containerID="6b567656d7a9741cd755d5829b23c33b4781c07ad337acce7aa9410f12e7ca6b" exitCode=0 Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.352927 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"908e5789-bf04-4249-8e14-2573398bd1c3","Type":"ContainerDied","Data":"6b567656d7a9741cd755d5829b23c33b4781c07ad337acce7aa9410f12e7ca6b"} Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.353874 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.356254 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.357274 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828" exitCode=0 Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.357295 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5" exitCode=0 Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:19:59.699642 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:00.366223 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:00.367137 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8" exitCode=0 Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.669899 4660 controller.go:195] "Failed to 
update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.670398 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.670757 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.671206 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.671453 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.671474 4660 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.671743 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="200ms" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.834092 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.834909 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:01.873305 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="400ms" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977313 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir\") pod \"908e5789-bf04-4249-8e14-2573398bd1c3\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977424 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access\") pod \"908e5789-bf04-4249-8e14-2573398bd1c3\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977432 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "908e5789-bf04-4249-8e14-2573398bd1c3" (UID: "908e5789-bf04-4249-8e14-2573398bd1c3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977463 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock\") pod \"908e5789-bf04-4249-8e14-2573398bd1c3\" (UID: \"908e5789-bf04-4249-8e14-2573398bd1c3\") " Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977486 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock" (OuterVolumeSpecName: "var-lock") pod "908e5789-bf04-4249-8e14-2573398bd1c3" (UID: "908e5789-bf04-4249-8e14-2573398bd1c3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977801 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.977820 4660 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/908e5789-bf04-4249-8e14-2573398bd1c3-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:01.983245 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "908e5789-bf04-4249-8e14-2573398bd1c3" (UID: "908e5789-bf04-4249-8e14-2573398bd1c3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:02.078922 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/908e5789-bf04-4249-8e14-2573398bd1c3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:02.273975 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="800ms" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:02.382372 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"908e5789-bf04-4249-8e14-2573398bd1c3","Type":"ContainerDied","Data":"a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4"} Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:02.382409 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c7334f0dcc045174dfc86c68581736f2946135ab63b051e0559f00301674b4" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:02.382470 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:02.406133 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:03.075933 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="1.6s" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:04.152983 4660 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8dwgp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:04.153050 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Nov 29 07:20:04 crc kubenswrapper[4660]: E1129 07:20:04.677076 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="3.2s" Nov 29 07:20:04 crc kubenswrapper[4660]: I1129 07:20:04.988909 4660 scope.go:117] "RemoveContainer" containerID="21098d62a94bf8a81f08385de10ca4771d540ba5f769f3eae41ca1c6a3394338" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.105660 4660 scope.go:117] "RemoveContainer" containerID="3ed1a1c4963b2173b8a6520c7053cf0196c633da18584f68cc266bb19f1627ea" Nov 29 07:20:05 crc kubenswrapper[4660]: W1129 07:20:05.110939 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1bfe078f21cc6794af2292d77ff649505d93adabb67188110511a1972b2c4eba WatchSource:0}: Error finding container 1bfe078f21cc6794af2292d77ff649505d93adabb67188110511a1972b2c4eba: Status 404 returned error can't find the container with id 1bfe078f21cc6794af2292d77ff649505d93adabb67188110511a1972b2c4eba Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.162108 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.163260 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.164017 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.165603 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.245658 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246103 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246186 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246293 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246390 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246647 4660 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246674 4660 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.246711 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.347413 4660 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.416477 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.417505 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.417823 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.417989 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1bfe078f21cc6794af2292d77ff649505d93adabb67188110511a1972b2c4eba"} Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.418180 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.432727 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.434111 4660 scope.go:117] "RemoveContainer" containerID="85fd3a6f607ddae646e1497063af7428566e3bed3f3ab4a641dce082f6424828" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.434223 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.443364 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerStarted","Data":"4fab74a06b7cc6834c7fa30ac94fa28a94ca73eb4c3d1daf55ef28b384c16121"} Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.448431 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.448666 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.448844 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqnt2\" (UniqueName: \"kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.448993 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.449025 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.449325 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.449365 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.449590 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") " Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.449761 4660 
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.450027 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") "
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.450225 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") "
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.450260 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") "
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.450447 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") "
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.450804 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error\") pod \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\" (UID: \"3069d78e-6be2-46bf-baae-bbe2ccf0b06b\") "
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.451295 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.456033 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.456768 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.456942 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.457094 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.457553 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2" (OuterVolumeSpecName: "kube-api-access-kqnt2") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "kube-api-access-kqnt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.458004 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.458879 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.459087 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.459131 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.459778 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.459828 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.460292 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.466529 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerStarted","Data":"19f318bddb3ab862bb442b59e0ca1e3261347e9f441e9fb645d7cf810b52efa3"} Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.467677 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.468766 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.468853 4660 scope.go:117] "RemoveContainer" containerID="bde5d15295b343288e9f7d636105c25a0774f238c144e51bd694a956fbfc6bdb" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.469221 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.469776 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.471385 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.483445 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.483517 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.483544 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3069d78e-6be2-46bf-baae-bbe2ccf0b06b" (UID: "3069d78e-6be2-46bf-baae-bbe2ccf0b06b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.485431 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" event={"ID":"3069d78e-6be2-46bf-baae-bbe2ccf0b06b","Type":"ContainerDied","Data":"cb368e7f6e1e2e3544d6a9c7babcb38204cb960ee3a595accd1347a7dd6a6de7"} Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.485540 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.487294 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.487727 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.487987 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.488316 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.502115 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerStarted","Data":"ca6b3a498863d451a26ad43b9417b7ccafaa8d60fdee5caf31aca2d1ce4c9eb4"} Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.503223 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.503722 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.505316 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.508133 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection 
refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.508434 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.517479 4660 scope.go:117] "RemoveContainer" containerID="a1ece2aaad2813ca8c7c1d3b7a1b546d2d09c21e70bd3a2e3986a843cd509ba5" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.517955 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.518376 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.518728 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.519015 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.519254 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.537553 4660 scope.go:117] "RemoveContainer" containerID="7adab4c61245c00ca2418e6ec39cddff779a7b65460c56fec20b6b97b529a0b8" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552410 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552711 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552729 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552741 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552755 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552767 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552778 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552790 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552803 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552816 4660 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552827 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552838 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552851 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.552865 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqnt2\" (UniqueName: \"kubernetes.io/projected/3069d78e-6be2-46bf-baae-bbe2ccf0b06b-kube-api-access-kqnt2\") on node \"crc\" DevicePath \"\"" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.554873 4660 scope.go:117] "RemoveContainer" containerID="f201607dfe44464d3f72c80d9fa61445582110857f8b910d466be9afd90ca3a8" Nov 29 07:20:05 
crc kubenswrapper[4660]: I1129 07:20:05.571383 4660 scope.go:117] "RemoveContainer" containerID="5e021d4a5d783d6074900e5949ae585917d6a1b85aae45116b7182e3c3157843" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.585897 4660 scope.go:117] "RemoveContainer" containerID="c70417ddc3b9603f65aefca39955d9c043718df8acea820cde0d5454e9f1a7a7" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.620078 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m5w6w" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.620264 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m5w6w" Nov 29 07:20:05 crc kubenswrapper[4660]: I1129 07:20:05.701658 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.513955 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerStarted","Data":"e661d2fd77aeb8c999a075d9343c1d1ab5a02cf343f953d4900435154b17dcc2"} Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.515112 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.515508 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.515960 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.516015 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"11c671a95cad8101a35d7a3e02bdd89b8d866ba918d977ec38f6eb4b898a9142"} Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.516318 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.516731 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.516794 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.165:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.517115 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.517477 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.517871 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.518091 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.518549 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.520812 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerStarted","Data":"336741f347ff9706ab357fa16ae49a7b574baf5667b22606fe6fe6e71141afec"} Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.521415 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.522021 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.523020 4660 
status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.523673 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.524181 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.524545 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.525120 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.525584 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.525819 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.525983 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.526144 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc 
kubenswrapper[4660]: I1129 07:20:06.526362 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.526525 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.608402 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c5557f591ad262ac58cc846c9df3d969fa5f7fd6add59f66382b641fc094b666\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:de047e9528b5f34e1262bafad52130ebf24f80265b580495e6d2618ca9afbbd7\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201183569},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redh
at-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f76
12f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.608792 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.609272 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.609881 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.610127 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:06 crc kubenswrapper[4660]: E1129 07:20:06.610149 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:20:06 crc kubenswrapper[4660]: I1129 07:20:06.674052 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m5w6w" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="registry-server" probeResult="failure" output=< Nov 29 07:20:06 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:20:06 crc kubenswrapper[4660]: > Nov 29 07:20:07 crc kubenswrapper[4660]: E1129 07:20:07.314731 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event=< Nov 29 07:20:07 crc kubenswrapper[4660]: &Event{ObjectMeta:{oauth-openshift-558db77b4-8dwgp.187c692c90ed4c68 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-8dwgp,UID:3069d78e-6be2-46bf-baae-bbe2ccf0b06b,APIVersion:v1,ResourceVersion:27246,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.17:6443/healthz": dial tcp 10.217.0.17:6443: connect: connection refused Nov 29 07:20:07 crc kubenswrapper[4660]: body: Nov 29 07:20:07 crc kubenswrapper[4660]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:19:54.153340008 
+0000 UTC m=+284.706869927,LastTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Nov 29 07:20:07 crc kubenswrapper[4660]: > Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.423830 4660 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.423921 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.528814 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.528860 4660 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8" exitCode=1 Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.528953 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8"} Nov 29 07:20:07 crc kubenswrapper[4660]: E1129 07:20:07.529391 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.165:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.529411 4660 scope.go:117] "RemoveContainer" containerID="9631c80af5cd0b3b9d827abdf17fe5bb039b282ca568fae42ec8b31abffa30a8" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.530069 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.530492 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.531033 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.531372 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.531712 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.532086 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.532547 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.532815 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:07 crc kubenswrapper[4660]: I1129 07:20:07.851316 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:20:07 crc kubenswrapper[4660]: E1129 07:20:07.879139 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="6.4s" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.437046 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.437753 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.538119 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.538302 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9eb08bf01d66daa77f1cb6651282d8ce00c219d2b5fd590244018ef4cf755e50"} Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.539043 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.539454 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.539736 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.540034 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.540757 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.541319 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.541665 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.542298 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:08 crc kubenswrapper[4660]: I1129 07:20:08.549681 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:20:08 
crc kubenswrapper[4660]: I1129 07:20:08.549886 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.482028 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wd4st" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="registry-server" probeResult="failure" output=< Nov 29 07:20:09 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:20:09 crc kubenswrapper[4660]: > Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.598396 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6mtm" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="registry-server" probeResult="failure" output=< Nov 29 07:20:09 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:20:09 crc kubenswrapper[4660]: > Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.695333 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.696854 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.697703 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.698010 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.698212 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.698411 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.698636 4660 status_manager.go:851] "Failed to get status for pod" 
podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:09 crc kubenswrapper[4660]: I1129 07:20:09.698837 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:14 crc kubenswrapper[4660]: E1129 07:20:14.280697 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" interval="7s" Nov 29 07:20:14 crc kubenswrapper[4660]: I1129 07:20:14.985062 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:20:14 crc kubenswrapper[4660]: I1129 07:20:14.985318 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.023010 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.024087 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.024595 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.025134 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.025472 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.025829 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 
38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.026182 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.026529 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.026887 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.377833 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swwtq" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.377875 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swwtq" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.414016 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swwtq" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.414657 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.423217 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.423646 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.423853 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.424030 4660 status_manager.go:851] "Failed to get status for 
pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.424195 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.424357 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.424519 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.610982 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vw7mz" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.611666 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.612152 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.612525 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.612827 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.613111 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.613351 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.613551 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.613726 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.623286 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swwtq" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.623804 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.624158 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.624657 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.624939 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.625318 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection 
refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.625656 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.625888 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.626165 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.655386 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m5w6w" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.656121 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.656556 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.658833 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.659272 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.659804 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.660352 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" 
pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.660701 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.660912 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.691436 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m5w6w" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.692065 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.693118 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.693365 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.693755 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.694142 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.694310 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.694457 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:15 crc kubenswrapper[4660]: I1129 07:20:15.694713 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.858170 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-29T07:20:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6dbcf185d7aecc64ad77a55ea28c8b2e46764d0976aa0cb7b7e359d7db7c7d99\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aa3b0a1844156935b9005476446d3a6e00eeaa29c793005658af056bb3739900\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201826122},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c5557f591ad262ac58cc846c9df3d969fa5f7fd6add59f66382b641fc094b666\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:de047e9528b5f34e1262bafad52130ebf24f80265b580495e6d2618ca9afbbd7\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201183569},{\\\"names\\\":[\\\"registry.redh
at.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a05172
41b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.859469 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.859773 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.859987 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.860227 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:16 crc kubenswrapper[4660]: E1129 07:20:16.860335 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 29 07:20:17 crc kubenswrapper[4660]: E1129 07:20:17.316082 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.129.56.165:6443: connect: connection refused" event=< Nov 29 07:20:17 crc kubenswrapper[4660]: &Event{ObjectMeta:{oauth-openshift-558db77b4-8dwgp.187c692c90ed4c68 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-8dwgp,UID:3069d78e-6be2-46bf-baae-bbe2ccf0b06b,APIVersion:v1,ResourceVersion:27246,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.17:6443/healthz": dial tcp 10.217.0.17:6443: connect: connection refused Nov 29 07:20:17 crc kubenswrapper[4660]: body: 
Nov 29 07:20:17 crc kubenswrapper[4660]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,LastTimestamp:2025-11-29 07:19:54.153340008 +0000 UTC m=+284.706869927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Nov 29 07:20:17 crc kubenswrapper[4660]: > Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.423060 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.693200 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.694433 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.695316 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.695934 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.696272 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.696749 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.697294 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.697723 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.698560 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.710978 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.711023 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:17 crc kubenswrapper[4660]: E1129 07:20:17.711644 4660 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.712466 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: W1129 07:20:17.734990 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e86b99867bcfdec48368b15c182ed672515931fefbec8f0a8aba7ed2607aa760 WatchSource:0}: Error finding container e86b99867bcfdec48368b15c182ed672515931fefbec8f0a8aba7ed2607aa760: Status 404 returned error can't find the container with id e86b99867bcfdec48368b15c182ed672515931fefbec8f0a8aba7ed2607aa760 Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.851487 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.855318 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.855863 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.856180 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.856413 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: 
connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.856575 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.856799 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.856945 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.857083 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:17 crc kubenswrapper[4660]: I1129 07:20:17.857217 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.483921 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.485216 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.485543 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.485882 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.486218 4660 status_manager.go:851] "Failed to get status for pod" 
podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.486650 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.487527 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.487817 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.488074 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.523322 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wd4st" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.524244 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.524703 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.525223 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.525521 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.525762 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.526058 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.526466 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.526766 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.590254 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e86b99867bcfdec48368b15c182ed672515931fefbec8f0a8aba7ed2607aa760"} Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.595508 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.595975 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.596294 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.596589 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.596877 4660 
status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.597187 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.597466 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.597800 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.598134 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.600753 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.601774 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.602019 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.602319 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.602678 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" 
pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.602948 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.603270 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.603547 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.603824 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.644774 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6mtm" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.645473 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.646806 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.647523 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.647970 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.648317 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.648672 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.649083 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:18 crc kubenswrapper[4660]: I1129 07:20:18.649417 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.598458 4660 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4f5ce94a8458046d2af2cd5abe26d2fadd8e3e5f40c2f46cc3502126aab1b3f0" exitCode=0 Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.598564 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4f5ce94a8458046d2af2cd5abe26d2fadd8e3e5f40c2f46cc3502126aab1b3f0"} Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.598743 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.598764 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:19 crc kubenswrapper[4660]: E1129 07:20:19.599606 4660 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.599682 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.600183 4660 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.600803 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.601381 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.601880 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.602370 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.602745 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.603360 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.697549 4660 status_manager.go:851] "Failed to get status for pod" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.698222 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" 
Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.698509 4660 status_manager.go:851] "Failed to get status for pod" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" pod="openshift-authentication/oauth-openshift-558db77b4-8dwgp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8dwgp\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.698819 4660 status_manager.go:851] "Failed to get status for pod" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" pod="openshift-marketplace/certified-operators-swwtq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-swwtq\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.699058 4660 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.699298 4660 status_manager.go:851] "Failed to get status for pod" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" pod="openshift-marketplace/redhat-operators-h6mtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6mtm\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.699572 4660 status_manager.go:851] "Failed to get status for pod" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" pod="openshift-marketplace/community-operators-m5w6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-m5w6w\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.699904 4660 status_manager.go:851] "Failed to get status for pod" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" pod="openshift-marketplace/redhat-operators-wd4st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wd4st\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:19 crc kubenswrapper[4660]: I1129 07:20:19.700116 4660 status_manager.go:851] "Failed to get status for pod" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" pod="openshift-marketplace/certified-operators-vw7mz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vw7mz\": dial tcp 38.129.56.165:6443: connect: connection refused" Nov 29 07:20:20 crc kubenswrapper[4660]: I1129 07:20:20.605988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4d4d220578ac6f34fae681e6796632f355fe3e8ae573f3d9a1ac3a796d3ad036"} Nov 29 07:20:21 crc kubenswrapper[4660]: I1129 07:20:21.621662 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ffa4c065e79babc50679ac9df3538f8a2759ce77471200140820b378e4552b2"} Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629108 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"499659017f431dd5f4a7a800a9b8ea2e1829b34c540f074f190998760fb02f04"} Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629346 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"27f1e49e825f10477ca000efc6ff9daa6ea6973322a39a8bf5cc6037c0008bc4"} Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629359 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8436eca6e1b2094575e72c2452ecca5c478eec48ed7fed6d095f85163f53cf02"} Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629486 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629629 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.629649 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.713564 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.713835 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.719197 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]log ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]etcd ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/generic-apiserver-start-informers ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-filter ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-apiextensions-informers ok Nov 29 07:20:22 crc kubenswrapper[4660]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Nov 29 07:20:22 crc kubenswrapper[4660]: [-]poststarthook/crd-informer-synced failed: reason withheld Nov 29 07:20:22 crc kubenswrapper[4660]: 
[+]poststarthook/start-system-namespaces-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 29 07:20:22 crc kubenswrapper[4660]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 29 07:20:22 crc kubenswrapper[4660]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/bootstrap-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/start-kube-aggregator-informers ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-registration-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-discovery-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]autoregister-completion ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-openapi-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 29 07:20:22 crc kubenswrapper[4660]: livez check failed Nov 29 07:20:22 crc kubenswrapper[4660]: I1129 07:20:22.719263 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 29 07:20:27 crc kubenswrapper[4660]: I1129 07:20:27.640012 4660 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:27 crc kubenswrapper[4660]: I1129 07:20:27.718303 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:27 crc kubenswrapper[4660]: I1129 07:20:27.732068 4660 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3fe7fb3c-99c2-447d-8693-15b71e81594a" Nov 29 07:20:28 crc kubenswrapper[4660]: I1129 07:20:28.661100 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:28 crc kubenswrapper[4660]: I1129 07:20:28.661682 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:28 crc kubenswrapper[4660]: I1129 07:20:28.667279 4660 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3fe7fb3c-99c2-447d-8693-15b71e81594a" Nov 29 07:20:28 crc kubenswrapper[4660]: I1129 07:20:28.669130 4660 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://4d4d220578ac6f34fae681e6796632f355fe3e8ae573f3d9a1ac3a796d3ad036" Nov 29 07:20:28 crc kubenswrapper[4660]: I1129 07:20:28.669156 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:29 crc kubenswrapper[4660]: I1129 07:20:29.668366 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:29 crc kubenswrapper[4660]: I1129 07:20:29.668409 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:29 crc kubenswrapper[4660]: I1129 07:20:29.672971 4660 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3fe7fb3c-99c2-447d-8693-15b71e81594a" Nov 29 07:20:30 crc kubenswrapper[4660]: I1129 07:20:30.671905 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:30 crc kubenswrapper[4660]: I1129 07:20:30.671932 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="73398adb-2c45-4f24-9e89-3cc192b80d60" Nov 29 07:20:30 crc kubenswrapper[4660]: I1129 07:20:30.675040 4660 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3fe7fb3c-99c2-447d-8693-15b71e81594a" Nov 29 07:20:50 crc kubenswrapper[4660]: I1129 07:20:50.766420 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 29 07:20:51 crc kubenswrapper[4660]: I1129 07:20:51.383934 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.563087 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.750776 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.967064 4660 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.968016 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m5w6w" podStartSLOduration=79.444197441 podStartE2EDuration="2m57.967995323s" podCreationTimestamp="2025-11-29 07:17:55 +0000 UTC" firstStartedPulling="2025-11-29 07:17:57.340937033 +0000 UTC m=+167.894466932" lastFinishedPulling="2025-11-29 07:19:35.864734915 +0000 UTC m=+266.418264814" observedRunningTime="2025-11-29 07:20:27.758311473 +0000 UTC m=+318.311841402" 
watchObservedRunningTime="2025-11-29 07:20:52.967995323 +0000 UTC m=+343.521525222" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.968243 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6mtm" podStartSLOduration=54.398954547 podStartE2EDuration="2m54.96823688s" podCreationTimestamp="2025-11-29 07:17:58 +0000 UTC" firstStartedPulling="2025-11-29 07:18:04.522392136 +0000 UTC m=+175.075922035" lastFinishedPulling="2025-11-29 07:20:05.091674469 +0000 UTC m=+295.645204368" observedRunningTime="2025-11-29 07:20:27.744057891 +0000 UTC m=+318.297587780" watchObservedRunningTime="2025-11-29 07:20:52.96823688 +0000 UTC m=+343.521766779" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.968436 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vw7mz" podStartSLOduration=70.918302383 podStartE2EDuration="2m58.968432625s" podCreationTimestamp="2025-11-29 07:17:54 +0000 UTC" firstStartedPulling="2025-11-29 07:17:57.301639429 +0000 UTC m=+167.855169318" lastFinishedPulling="2025-11-29 07:19:45.351769621 +0000 UTC m=+275.905299560" observedRunningTime="2025-11-29 07:20:27.66643142 +0000 UTC m=+318.219961329" watchObservedRunningTime="2025-11-29 07:20:52.968432625 +0000 UTC m=+343.521962524" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.971194 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swwtq" podStartSLOduration=53.495725767 podStartE2EDuration="2m57.971184424s" podCreationTimestamp="2025-11-29 07:17:55 +0000 UTC" firstStartedPulling="2025-11-29 07:17:57.28022948 +0000 UTC m=+167.833759379" lastFinishedPulling="2025-11-29 07:20:01.755688137 +0000 UTC m=+292.309218036" observedRunningTime="2025-11-29 07:20:27.7302182 +0000 UTC m=+318.283748099" watchObservedRunningTime="2025-11-29 07:20:52.971184424 +0000 UTC m=+343.524714333" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.971759 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wd4st" podStartSLOduration=56.467366594 podStartE2EDuration="2m55.971754431s" podCreationTimestamp="2025-11-29 07:17:57 +0000 UTC" firstStartedPulling="2025-11-29 07:18:05.555973644 +0000 UTC m=+176.109503543" lastFinishedPulling="2025-11-29 07:20:05.060361481 +0000 UTC m=+295.613891380" observedRunningTime="2025-11-29 07:20:27.771406793 +0000 UTC m=+318.324936702" watchObservedRunningTime="2025-11-29 07:20:52.971754431 +0000 UTC m=+343.525284330" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.972583 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-8dwgp"] Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.972660 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.979451 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 29 07:20:52 crc kubenswrapper[4660]: I1129 07:20:52.995205 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.995181742 podStartE2EDuration="25.995181742s" podCreationTimestamp="2025-11-29 07:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:20:52.992173745 +0000 UTC m=+343.545703634" watchObservedRunningTime="2025-11-29 07:20:52.995181742 +0000 UTC m=+343.548711661" Nov 29 07:20:53 crc kubenswrapper[4660]: I1129 07:20:53.029134 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 29 07:20:53 crc kubenswrapper[4660]: I1129 07:20:53.390025 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 29 07:20:53 crc kubenswrapper[4660]: I1129 07:20:53.550411 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 29 07:20:53 crc kubenswrapper[4660]: I1129 07:20:53.703701 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" path="/var/lib/kubelet/pods/3069d78e-6be2-46bf-baae-bbe2ccf0b06b/volumes" Nov 29 07:20:54 crc kubenswrapper[4660]: I1129 07:20:54.412692 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 29 07:20:54 crc kubenswrapper[4660]: I1129 07:20:54.791683 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 29 07:20:55 crc kubenswrapper[4660]: I1129 07:20:55.023047 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:20:55 crc kubenswrapper[4660]: I1129 07:20:55.906469 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 29 07:20:56 crc kubenswrapper[4660]: I1129 07:20:56.522779 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.310564 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.327662 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.541711 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.585832 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.733294 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.825880 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerID="5ba20059163302436888816182924c592bbc1e8fa9b9903e8c93c9dc7eed2117" exitCode=0 Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.825967 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerDied","Data":"5ba20059163302436888816182924c592bbc1e8fa9b9903e8c93c9dc7eed2117"} Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.826677 
Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.826677 4660 scope.go:117] "RemoveContainer" containerID="5ba20059163302436888816182924c592bbc1e8fa9b9903e8c93c9dc7eed2117"
Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.880252 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 29 07:20:57 crc kubenswrapper[4660]: I1129 07:20:57.909647 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.037212 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.071529 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.268772 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.401744 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.441421 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.669109 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.834249 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-974tz_6a035a3a-155a-4b6e-ac5c-ca7118e1443d/marketplace-operator/1.log"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.834824 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerID="dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0" exitCode=1
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.834873 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerDied","Data":"dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0"}
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.834936 4660 scope.go:117] "RemoveContainer" containerID="5ba20059163302436888816182924c592bbc1e8fa9b9903e8c93c9dc7eed2117"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.835512 4660 scope.go:117] "RemoveContainer" containerID="dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0"
Nov 29 07:20:58 crc kubenswrapper[4660]: E1129 07:20:58.835864 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-974tz_openshift-marketplace(6a035a3a-155a-4b6e-ac5c-ca7118e1443d)\"" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d"
Nov 29 07:20:58 crc kubenswrapper[4660]: I1129 07:20:58.954392 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.195255 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.228367 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.251088 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.269456 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.357471 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.526092 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.631104 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.706385 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 29 07:20:59 crc kubenswrapper[4660]: I1129 07:20:59.840381 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-974tz_6a035a3a-155a-4b6e-ac5c-ca7118e1443d/marketplace-operator/1.log"
Nov 29 07:21:00 crc kubenswrapper[4660]: I1129 07:21:00.173263 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 29 07:21:00 crc kubenswrapper[4660]: I1129 07:21:00.224936 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 29 07:21:00 crc kubenswrapper[4660]: I1129 07:21:00.310161 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 29 07:21:00 crc kubenswrapper[4660]: I1129 07:21:00.930022 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.070081 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.253500 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.600774 4660 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.601088 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://11c671a95cad8101a35d7a3e02bdd89b8d866ba918d977ec38f6eb4b898a9142" gracePeriod=5
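The CrashLoopBackOff error above shows the first 10s restart delay for marketplace-operator. In recent kubelets the delay starts at 10s, doubles on each subsequent crash, is capped at 5m, and resets after the container runs cleanly for a while; the ladder below illustrates that schedule (defaults stated from general knowledge of kubelet behavior, not from this log):

```go
// Illustrate the kubelet-style crash-loop restart backoff implied by the
// "back-off 10s" message above: 10s base delay, doubling per restart,
// capped at 5m (assumed kubelet defaults, see the note above).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```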
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.799267 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 29 07:21:01 crc kubenswrapper[4660]: I1129 07:21:01.902905 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.136348 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.221010 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.244367 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.259481 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.300348 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.626995 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.719804 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.723382 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.814178 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.828741 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.949134 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 29 07:21:02 crc kubenswrapper[4660]: I1129 07:21:02.957536 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.092889 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.098214 4660 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.193071 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.233753 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.262436 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.467056 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.596991 4660 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.630877 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.639939 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.695387 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 29 07:21:03 crc kubenswrapper[4660]: I1129 07:21:03.964726 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.093257 4660 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.194680 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.480965 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.496078 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.651711 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.657001 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.787816 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.824380 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.890829 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.938019 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 29 07:21:04 crc kubenswrapper[4660]: I1129 07:21:04.987596 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.019267 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.334918 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.736775 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.779070 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.835210 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.910327 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.910733 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.911777 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 29 07:21:05 crc kubenswrapper[4660]: I1129 07:21:05.911820 4660 scope.go:117] "RemoveContainer" containerID="dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0" Nov 29 07:21:05 crc kubenswrapper[4660]: E1129 07:21:05.912086 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-974tz_openshift-marketplace(6a035a3a-155a-4b6e-ac5c-ca7118e1443d)\"" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.446367 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.604691 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.675883 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.820003 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.833339 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.877198 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.877261 4660 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="11c671a95cad8101a35d7a3e02bdd89b8d866ba918d977ec38f6eb4b898a9142" exitCode=137 Nov 29 07:21:06 crc kubenswrapper[4660]: I1129 07:21:06.934213 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.007362 4660 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.072949 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.101500 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.177870 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.178329 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.232558 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.269973 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332204 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332274 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332345 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332377 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332410 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332415 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332442 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332471 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332566 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332847 4660 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332870 4660 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332879 4660 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.332888 4660 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.340552 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.433568 4660 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.609150 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.609360 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.611837 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.699938 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.805566 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.888684 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.888754 4660 scope.go:117] "RemoveContainer" containerID="11c671a95cad8101a35d7a3e02bdd89b8d866ba918d977ec38f6eb4b898a9142" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.888858 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.900083 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.920651 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 29 07:21:07 crc kubenswrapper[4660]: I1129 07:21:07.982324 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.140650 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.173862 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.205182 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.291042 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.415191 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.426476 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.444818 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.507908 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.728677 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 29 07:21:08 crc kubenswrapper[4660]: I1129 07:21:08.928336 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.068624 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.197517 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.213409 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.279363 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.359744 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.366856 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.367487 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.592810 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.601976 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.607681 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.649558 4660 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.676284 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 29 07:21:09 crc kubenswrapper[4660]: I1129 07:21:09.825124 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.021024 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.468978 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.524370 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.663332 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.752885 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.889028 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.905980 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 29 07:21:10 crc kubenswrapper[4660]: I1129 07:21:10.928197 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.233048 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.306591 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.314956 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.356685 4660 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.368036 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.479056 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.744910 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 29 07:21:11 crc kubenswrapper[4660]: I1129 07:21:11.769733 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.028811 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.029118 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.104315 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.104489 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.245952 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.327065 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.427156 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.494233 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.507919 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.568575 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.589641 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.755708 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.877837 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.946456 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 29 07:21:12 crc kubenswrapper[4660]: I1129 07:21:12.961133 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.058763 4660 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.201159 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.223133 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.590870 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.668482 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:21:13 crc kubenswrapper[4660]: I1129 07:21:13.751998 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.021327 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.050496 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.259783 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.280150 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.306941 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.449346 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.569513 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.572402 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.628405 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.890680 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.904176 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 29 07:21:14 crc kubenswrapper[4660]: I1129 07:21:14.946774 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.083489 4660 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.173105 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.345478 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.381013 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.392567 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.553084 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.554267 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.684536 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.882558 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.923357 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 29 07:21:15 crc kubenswrapper[4660]: I1129 07:21:15.925297 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.095106 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.121339 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.263430 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.280282 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.334278 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.344112 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.504024 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582120 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66548646cd-z7lht"] Nov 29 07:21:16 
crc kubenswrapper[4660]: E1129 07:21:16.582360 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582373 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift"
Nov 29 07:21:16 crc kubenswrapper[4660]: E1129 07:21:16.582386 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582392 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 29 07:21:16 crc kubenswrapper[4660]: E1129 07:21:16.582405 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" containerName="installer"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582411 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" containerName="installer"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582500 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582515 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3069d78e-6be2-46bf-baae-bbe2ccf0b06b" containerName="oauth-openshift"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582523 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="908e5789-bf04-4249-8e14-2573398bd1c3" containerName="installer"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.582989 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.584917 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.585159 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.585333 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.585652 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.585779 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.586362 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.586461 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.586955 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.587431 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.587595 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.588360 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.589569 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.594503 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.595890 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66548646cd-z7lht"]
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.603907 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.605118 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.619361 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.643409 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.660847 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-policies\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.660925 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-dir\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661068 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661135 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661163 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661235 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661319 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661450 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-session\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: 
\"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661482 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-error\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661568 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661672 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661737 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661816 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pblvq\" (UniqueName: \"kubernetes.io/projected/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-kube-api-access-pblvq\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.661841 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-login\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763040 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-error\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763088 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763116 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763133 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763158 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pblvq\" (UniqueName: \"kubernetes.io/projected/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-kube-api-access-pblvq\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763176 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-login\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763198 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-policies\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763214 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-dir\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763233 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763252 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763269 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763285 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763310 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.763344 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-session\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.764146 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-dir\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.764953 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-audit-policies\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.765457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.765695 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: 
\"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.766263 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.770476 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.770923 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-session\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.779904 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-error\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.780013 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.780227 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.780549 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.780755 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: 
\"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.780969 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-v4-0-config-user-template-login\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.788049 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pblvq\" (UniqueName: \"kubernetes.io/projected/06c20dae-5bfe-4107-9aa4-faa9ae9d618d-kube-api-access-pblvq\") pod \"oauth-openshift-66548646cd-z7lht\" (UID: \"06c20dae-5bfe-4107-9aa4-faa9ae9d618d\") " pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.945784 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.133552 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66548646cd-z7lht"] Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.322267 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.617035 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.706138 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.886959 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.941910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" event={"ID":"06c20dae-5bfe-4107-9aa4-faa9ae9d618d","Type":"ContainerStarted","Data":"4c5dc53e01e5ade3a7c0f369b9d9bf873785121ade13a23b488bbc93474b28b6"} Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.941966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" event={"ID":"06c20dae-5bfe-4107-9aa4-faa9ae9d618d","Type":"ContainerStarted","Data":"fa862be111424086f6d8300a079e0890bfcb2ad41504f381b947fb69edcbebf3"} Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.942216 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.949112 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.965683 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" podStartSLOduration=112.965600986 podStartE2EDuration="1m52.965600986s" podCreationTimestamp="2025-11-29 07:19:25 +0000 UTC" 
Nov 29 07:21:16 crc kubenswrapper[4660]: I1129 07:21:16.945784 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.133552 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66548646cd-z7lht"]
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.322267 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.617035 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.706138 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.886959 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.941910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" event={"ID":"06c20dae-5bfe-4107-9aa4-faa9ae9d618d","Type":"ContainerStarted","Data":"4c5dc53e01e5ade3a7c0f369b9d9bf873785121ade13a23b488bbc93474b28b6"}
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.941966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" event={"ID":"06c20dae-5bfe-4107-9aa4-faa9ae9d618d","Type":"ContainerStarted","Data":"fa862be111424086f6d8300a079e0890bfcb2ad41504f381b947fb69edcbebf3"}
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.942216 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.949112 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht"
Nov 29 07:21:17 crc kubenswrapper[4660]: I1129 07:21:17.965683 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66548646cd-z7lht" podStartSLOduration=112.965600986 podStartE2EDuration="1m52.965600986s" podCreationTimestamp="2025-11-29 07:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:17.962740335 +0000 UTC m=+368.516270244" watchObservedRunningTime="2025-11-29 07:21:17.965600986 +0000 UTC m=+368.519130905"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.144158 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.223072 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.354909 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.511078 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.531532 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 29 07:21:18 crc kubenswrapper[4660]: I1129 07:21:18.635742 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.270501 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.362849 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.584119 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.662131 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.695872 4660 scope.go:117] "RemoveContainer" containerID="dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.785164 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.849854 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.960988 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-974tz_6a035a3a-155a-4b6e-ac5c-ca7118e1443d/marketplace-operator/1.log"
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.961272 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerStarted","Data":"3a7d48c8a6db46ee09451ea686168bf1cd076e244fe3d67347ea7176637c8de9"}
Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.961580 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-974tz"
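
[annotation] The pod_startup_latency_tracker record above reports podStartSLOduration=112.965600986, i.e. the running time observed at 07:21:17.965600986 minus podCreationTimestamp (07:19:25); both image-pull timestamps are the zero time because nothing was pulled, so the SLO and end-to-end durations coincide. The arithmetic checks out, using the two timestamps copied from the record:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created, err := time.Parse(time.RFC3339, "2025-11-29T07:19:25Z")
    	if err != nil {
    		panic(err)
    	}
    	running, err := time.Parse(time.RFC3339Nano, "2025-11-29T07:21:17.965600986Z")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 1m52.965600986s, matching podStartE2EDuration in the log.
    	fmt.Println(running.Sub(created))
    }
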
pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.962702 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-974tz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Nov 29 07:21:19 crc kubenswrapper[4660]: I1129 07:21:19.962760 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.042737 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.175906 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.211271 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.219320 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.372739 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.481040 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.788513 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.790346 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 29 07:21:20 crc kubenswrapper[4660]: I1129 07:21:20.972557 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" Nov 29 07:21:21 crc kubenswrapper[4660]: I1129 07:21:21.054371 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 29 07:21:21 crc kubenswrapper[4660]: I1129 07:21:21.447905 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 29 07:21:21 crc kubenswrapper[4660]: I1129 07:21:21.589270 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 29 07:21:21 crc kubenswrapper[4660]: I1129 07:21:21.591542 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 29 07:21:21 crc kubenswrapper[4660]: I1129 07:21:21.662773 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 29 07:21:22 crc kubenswrapper[4660]: I1129 07:21:22.598653 4660 
Nov 29 07:21:23 crc kubenswrapper[4660]: I1129 07:21:23.449862 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 29 07:21:23 crc kubenswrapper[4660]: I1129 07:21:23.505338 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 29 07:21:23 crc kubenswrapper[4660]: I1129 07:21:23.551446 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 29 07:21:23 crc kubenswrapper[4660]: I1129 07:21:23.639287 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 29 07:21:23 crc kubenswrapper[4660]: I1129 07:21:23.751303 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 29 07:21:24 crc kubenswrapper[4660]: I1129 07:21:24.488858 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 29 07:21:24 crc kubenswrapper[4660]: I1129 07:21:24.538698 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 29 07:21:24 crc kubenswrapper[4660]: I1129 07:21:24.617420 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 29 07:21:24 crc kubenswrapper[4660]: I1129 07:21:24.862981 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 29 07:21:25 crc kubenswrapper[4660]: I1129 07:21:25.269024 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 29 07:21:25 crc kubenswrapper[4660]: I1129 07:21:25.302943 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 29 07:21:25 crc kubenswrapper[4660]: I1129 07:21:25.491999 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 29 07:21:25 crc kubenswrapper[4660]: I1129 07:21:25.726850 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 29 07:21:35 crc kubenswrapper[4660]: I1129 07:21:35.499790 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 07:21:35 crc kubenswrapper[4660]: I1129 07:21:35.500291 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 07:21:44 crc kubenswrapper[4660]: I1129 07:21:44.791213 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"]
Nov 29 07:21:44 crc kubenswrapper[4660]: I1129 07:21:44.791946 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerName="controller-manager" containerID="cri-o://05ccc0f3dce711c727ee6acba8d0c57b8d0cc002fe99de83ed3fe432c9d8261c" gracePeriod=30
Nov 29 07:21:44 crc kubenswrapper[4660]: I1129 07:21:44.867758 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"]
Nov 29 07:21:44 crc kubenswrapper[4660]: I1129 07:21:44.867996 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" podUID="aacf3710-663f-4cfa-aa89-7bbc848e094d" containerName="route-controller-manager" containerID="cri-o://2adda2a8771cb1f2797ace1973a84247ff9cdf13e1bf7e6039647411e024886a" gracePeriod=30
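
[annotation] "SyncLoop DELETE" followed by "Killing container with a grace period ... gracePeriod=30" above is the normal termination path: an API-side delete, then a stop signal bounded by the pod's termination grace period (30s is the Kubernetes default) before a hard kill; the exitCode=0 records that follow show both containers shutting down cleanly within it. A client-go sketch of a delete that sets the grace period explicitly, with the namespace and pod name copied from the log and clientset setup as in the reflector sketch earlier:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	grace := int64(30) // mirrors gracePeriod=30 in the log
    	err = client.CoreV1().Pods("openshift-controller-manager").Delete(
    		context.TODO(),
    		"controller-manager-879f6c89f-sm8tt",
    		metav1.DeleteOptions{GracePeriodSeconds: &grace},
    	)
    	if err != nil {
    		panic(err)
    	}
    }
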
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.096810 4660 generic.go:334] "Generic (PLEG): container finished" podID="aacf3710-663f-4cfa-aa89-7bbc848e094d" containerID="2adda2a8771cb1f2797ace1973a84247ff9cdf13e1bf7e6039647411e024886a" exitCode=0
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.096866 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" event={"ID":"aacf3710-663f-4cfa-aa89-7bbc848e094d","Type":"ContainerDied","Data":"2adda2a8771cb1f2797ace1973a84247ff9cdf13e1bf7e6039647411e024886a"}
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.105386 4660 generic.go:334] "Generic (PLEG): container finished" podID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerID="05ccc0f3dce711c727ee6acba8d0c57b8d0cc002fe99de83ed3fe432c9d8261c" exitCode=0
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.105428 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" event={"ID":"6051c490-f396-4257-a4f8-e0c8a1bcf910","Type":"ContainerDied","Data":"05ccc0f3dce711c727ee6acba8d0c57b8d0cc002fe99de83ed3fe432c9d8261c"}
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.161267 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt"
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.251804 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfqxt\" (UniqueName: \"kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt\") pod \"6051c490-f396-4257-a4f8-e0c8a1bcf910\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") "
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.251874 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca\") pod \"6051c490-f396-4257-a4f8-e0c8a1bcf910\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") "
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.251963 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config\") pod \"6051c490-f396-4257-a4f8-e0c8a1bcf910\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") "
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.252869 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca" (OuterVolumeSpecName: "client-ca") pod "6051c490-f396-4257-a4f8-e0c8a1bcf910" (UID: "6051c490-f396-4257-a4f8-e0c8a1bcf910"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.253056 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config" (OuterVolumeSpecName: "config") pod "6051c490-f396-4257-a4f8-e0c8a1bcf910" (UID: "6051c490-f396-4257-a4f8-e0c8a1bcf910"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.253130 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles\") pod \"6051c490-f396-4257-a4f8-e0c8a1bcf910\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") "
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.253226 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert\") pod \"6051c490-f396-4257-a4f8-e0c8a1bcf910\" (UID: \"6051c490-f396-4257-a4f8-e0c8a1bcf910\") "
Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.253939 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6051c490-f396-4257-a4f8-e0c8a1bcf910" (UID: "6051c490-f396-4257-a4f8-e0c8a1bcf910"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.254236 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.254259 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.254271 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6051c490-f396-4257-a4f8-e0c8a1bcf910-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.256108 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.257589 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6051c490-f396-4257-a4f8-e0c8a1bcf910" (UID: "6051c490-f396-4257-a4f8-e0c8a1bcf910"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.260713 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt" (OuterVolumeSpecName: "kube-api-access-bfqxt") pod "6051c490-f396-4257-a4f8-e0c8a1bcf910" (UID: "6051c490-f396-4257-a4f8-e0c8a1bcf910"). InnerVolumeSpecName "kube-api-access-bfqxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.354926 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6spf\" (UniqueName: \"kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf\") pod \"aacf3710-663f-4cfa-aa89-7bbc848e094d\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.354988 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert\") pod \"aacf3710-663f-4cfa-aa89-7bbc848e094d\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.355088 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config\") pod \"aacf3710-663f-4cfa-aa89-7bbc848e094d\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.355160 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca\") pod \"aacf3710-663f-4cfa-aa89-7bbc848e094d\" (UID: \"aacf3710-663f-4cfa-aa89-7bbc848e094d\") " Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.355340 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfqxt\" (UniqueName: \"kubernetes.io/projected/6051c490-f396-4257-a4f8-e0c8a1bcf910-kube-api-access-bfqxt\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.355351 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6051c490-f396-4257-a4f8-e0c8a1bcf910-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.356219 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config" (OuterVolumeSpecName: "config") pod "aacf3710-663f-4cfa-aa89-7bbc848e094d" (UID: "aacf3710-663f-4cfa-aa89-7bbc848e094d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.356443 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca" (OuterVolumeSpecName: "client-ca") pod "aacf3710-663f-4cfa-aa89-7bbc848e094d" (UID: "aacf3710-663f-4cfa-aa89-7bbc848e094d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.358113 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aacf3710-663f-4cfa-aa89-7bbc848e094d" (UID: "aacf3710-663f-4cfa-aa89-7bbc848e094d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.358130 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf" (OuterVolumeSpecName: "kube-api-access-s6spf") pod "aacf3710-663f-4cfa-aa89-7bbc848e094d" (UID: "aacf3710-663f-4cfa-aa89-7bbc848e094d"). InnerVolumeSpecName "kube-api-access-s6spf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.456055 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.456087 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aacf3710-663f-4cfa-aa89-7bbc848e094d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.456097 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6spf\" (UniqueName: \"kubernetes.io/projected/aacf3710-663f-4cfa-aa89-7bbc848e094d-kube-api-access-s6spf\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:45 crc kubenswrapper[4660]: I1129 07:21:45.456106 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf3710-663f-4cfa-aa89-7bbc848e094d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.118479 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.118487 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9" event={"ID":"aacf3710-663f-4cfa-aa89-7bbc848e094d","Type":"ContainerDied","Data":"3f7359020ba06b345d58af525068706894c2c81dc19af0a1d012d53d6fde03b4"} Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.118684 4660 scope.go:117] "RemoveContainer" containerID="2adda2a8771cb1f2797ace1973a84247ff9cdf13e1bf7e6039647411e024886a" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.124301 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm8tt" event={"ID":"6051c490-f396-4257-a4f8-e0c8a1bcf910","Type":"ContainerDied","Data":"bf05727a23408b9b1deea7c6f9cd991decfe09cd0dcacee49ded3a24ff39d47b"} Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.124440 4660 util.go:48] "No ready sandbox for pod can be found. 
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.146663 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.147542 4660 scope.go:117] "RemoveContainer" containerID="05ccc0f3dce711c727ee6acba8d0c57b8d0cc002fe99de83ed3fe432c9d8261c"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.152498 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x4nv9"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.156719 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.160115 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm8tt"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.760981 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"]
Nov 29 07:21:46 crc kubenswrapper[4660]: E1129 07:21:46.761517 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerName="controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.761531 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerName="controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: E1129 07:21:46.761553 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacf3710-663f-4cfa-aa89-7bbc848e094d" containerName="route-controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.761561 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacf3710-663f-4cfa-aa89-7bbc848e094d" containerName="route-controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.761698 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" containerName="controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.761724 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacf3710-663f-4cfa-aa89-7bbc848e094d" containerName="route-controller-manager"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.762274 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"
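
[annotation] The cpu_manager/state_mem/memory_manager lines above show the kubelet dropping stale resource-manager state for the two deleted pods before admitting their replacements; the CPU manager persists its assignments in a JSON checkpoint at /var/lib/kubelet/cpu_manager_state. A minimal reader for that checkpoint, with the struct fields inferred from the file's observed JSON shape rather than imported from kubelet-internal packages:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Assumed shape of /var/lib/kubelet/cpu_manager_state; the authoritative
    // type lives in the kubelet's internal cpumanager/state package.
    type cpuManagerState struct {
    	PolicyName    string                       `json:"policyName"`
    	DefaultCPUSet string                       `json:"defaultCpuSet"`
    	Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
    	Checksum      uint64                       `json:"checksum"`
    }

    func main() {
    	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
    	if err != nil {
    		panic(err)
    	}
    	var st cpuManagerState
    	if err := json.Unmarshal(raw, &st); err != nil {
    		panic(err)
    	}
    	fmt.Printf("policy=%s defaultCPUSet=%q pods with pinned CPUs: %d\n",
    		st.PolicyName, st.DefaultCPUSet, len(st.Entries))
    }
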
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.763839 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.764037 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.765369 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.767054 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.767374 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.767561 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.771410 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.772277 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.776223 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.776451 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.776663 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.776815 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.776961 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.777176 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.778864 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.785058 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"]
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.785753 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877580 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7"
\"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877705 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877742 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877777 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbzs\" (UniqueName: \"kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877809 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877888 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877919 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.877963 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.878011 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtwg\" 
(UniqueName: \"kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979407 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979474 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979527 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979581 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgtwg\" (UniqueName: \"kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979684 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979740 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979802 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979851 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbzs\" (UniqueName: \"kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs\") pod 
\"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.979911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.983934 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.984286 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.984698 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.984711 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:46 crc kubenswrapper[4660]: I1129 07:21:46.985148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:46.988916 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:46.994599 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.014734 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgtwg\" (UniqueName: \"kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg\") pod \"controller-manager-7bd8668598-jtsv7\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.021596 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbzs\" (UniqueName: \"kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs\") pod \"route-controller-manager-64947b58b-ddlzb\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.083566 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.104409 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.549807 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"] Nov 29 07:21:47 crc kubenswrapper[4660]: W1129 07:21:47.555580 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f3c9e5_09c2_4178_b7ba_eb854b1e4841.slice/crio-c1085f51fcc50ccd4f8c7550c4d2f48bac264f26e9167c4adf35cac9415e2304 WatchSource:0}: Error finding container c1085f51fcc50ccd4f8c7550c4d2f48bac264f26e9167c4adf35cac9415e2304: Status 404 returned error can't find the container with id c1085f51fcc50ccd4f8c7550c4d2f48bac264f26e9167c4adf35cac9415e2304 Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.558106 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"] Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.700363 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6051c490-f396-4257-a4f8-e0c8a1bcf910" path="/var/lib/kubelet/pods/6051c490-f396-4257-a4f8-e0c8a1bcf910/volumes" Nov 29 07:21:47 crc kubenswrapper[4660]: I1129 07:21:47.701372 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacf3710-663f-4cfa-aa89-7bbc848e094d" path="/var/lib/kubelet/pods/aacf3710-663f-4cfa-aa89-7bbc848e094d/volumes" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.143700 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" event={"ID":"ff1c1630-344d-4dcb-a350-462316235c49","Type":"ContainerStarted","Data":"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415"} Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.143746 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" event={"ID":"ff1c1630-344d-4dcb-a350-462316235c49","Type":"ContainerStarted","Data":"a32628303ddc40e2c4e9284b72e7dfe72eea0003ffb24b8cec2277f20f482172"} Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.143858 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 
29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.146246 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" event={"ID":"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841","Type":"ContainerStarted","Data":"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7"} Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.146278 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" event={"ID":"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841","Type":"ContainerStarted","Data":"c1085f51fcc50ccd4f8c7550c4d2f48bac264f26e9167c4adf35cac9415e2304"} Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.146473 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.149652 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.166545 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" podStartSLOduration=4.166526852 podStartE2EDuration="4.166526852s" podCreationTimestamp="2025-11-29 07:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:48.162101002 +0000 UTC m=+398.715630911" watchObservedRunningTime="2025-11-29 07:21:48.166526852 +0000 UTC m=+398.720056751" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.181073 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" podStartSLOduration=4.181056998 podStartE2EDuration="4.181056998s" podCreationTimestamp="2025-11-29 07:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:48.178920985 +0000 UTC m=+398.732450904" watchObservedRunningTime="2025-11-29 07:21:48.181056998 +0000 UTC m=+398.734586897" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.263401 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.653706 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"] Nov 29 07:21:48 crc kubenswrapper[4660]: I1129 07:21:48.674710 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"] Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.155269 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" podUID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" containerName="route-controller-manager" containerID="cri-o://36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7" gracePeriod=30 Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.155437 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" 
podUID="ff1c1630-344d-4dcb-a350-462316235c49" containerName="controller-manager" containerID="cri-o://d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415" gracePeriod=30 Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.588164 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.594211 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.617511 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:50 crc kubenswrapper[4660]: E1129 07:21:50.617760 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1c1630-344d-4dcb-a350-462316235c49" containerName="controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.617771 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1c1630-344d-4dcb-a350-462316235c49" containerName="controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: E1129 07:21:50.617792 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" containerName="route-controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.617798 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" containerName="route-controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.617886 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" containerName="route-controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.617895 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1c1630-344d-4dcb-a350-462316235c49" containerName="controller-manager" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.618249 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.631841 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728551 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtwg\" (UniqueName: \"kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg\") pod \"ff1c1630-344d-4dcb-a350-462316235c49\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728708 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert\") pod \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728741 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert\") pod \"ff1c1630-344d-4dcb-a350-462316235c49\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728786 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca\") pod \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728841 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles\") pod \"ff1c1630-344d-4dcb-a350-462316235c49\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728869 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") pod \"ff1c1630-344d-4dcb-a350-462316235c49\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728900 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbzs\" (UniqueName: \"kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs\") pod \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.728997 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config\") pod \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\" (UID: \"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729024 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config\") pod \"ff1c1630-344d-4dcb-a350-462316235c49\" (UID: \"ff1c1630-344d-4dcb-a350-462316235c49\") " Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729236 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729298 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729350 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58d7\" (UniqueName: \"kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729476 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729497 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" (UID: "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.729514 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.730096 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config" (OuterVolumeSpecName: "config") pod "ff1c1630-344d-4dcb-a350-462316235c49" (UID: "ff1c1630-344d-4dcb-a350-462316235c49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.730133 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff1c1630-344d-4dcb-a350-462316235c49" (UID: "ff1c1630-344d-4dcb-a350-462316235c49"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.730165 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ff1c1630-344d-4dcb-a350-462316235c49" (UID: "ff1c1630-344d-4dcb-a350-462316235c49"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.730478 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config" (OuterVolumeSpecName: "config") pod "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" (UID: "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.734418 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff1c1630-344d-4dcb-a350-462316235c49" (UID: "ff1c1630-344d-4dcb-a350-462316235c49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.734453 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg" (OuterVolumeSpecName: "kube-api-access-kgtwg") pod "ff1c1630-344d-4dcb-a350-462316235c49" (UID: "ff1c1630-344d-4dcb-a350-462316235c49"). InnerVolumeSpecName "kube-api-access-kgtwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.734464 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" (UID: "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.735076 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs" (OuterVolumeSpecName: "kube-api-access-sdbzs") pod "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" (UID: "a6f3c9e5-09c2-4178-b7ba-eb854b1e4841"). InnerVolumeSpecName "kube-api-access-sdbzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.830752 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.830904 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.830990 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h58d7\" (UniqueName: \"kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831039 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831078 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831129 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgtwg\" (UniqueName: \"kubernetes.io/projected/ff1c1630-344d-4dcb-a350-462316235c49-kube-api-access-kgtwg\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831142 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831154 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1c1630-344d-4dcb-a350-462316235c49-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831165 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831175 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831186 4660 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831196 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbzs\" (UniqueName: \"kubernetes.io/projected/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-kube-api-access-sdbzs\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831208 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.831218 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1c1630-344d-4dcb-a350-462316235c49-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.832202 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.833423 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.833507 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.838931 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.854007 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h58d7\" (UniqueName: \"kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7\") pod \"controller-manager-87448f8dd-k4vlg\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:50 crc kubenswrapper[4660]: I1129 07:21:50.938039 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.161793 4660 generic.go:334] "Generic (PLEG): container finished" podID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" containerID="36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7" exitCode=0 Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.161852 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" event={"ID":"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841","Type":"ContainerDied","Data":"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7"} Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.161877 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" event={"ID":"a6f3c9e5-09c2-4178-b7ba-eb854b1e4841","Type":"ContainerDied","Data":"c1085f51fcc50ccd4f8c7550c4d2f48bac264f26e9167c4adf35cac9415e2304"} Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.161893 4660 scope.go:117] "RemoveContainer" containerID="36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.161991 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.166524 4660 generic.go:334] "Generic (PLEG): container finished" podID="ff1c1630-344d-4dcb-a350-462316235c49" containerID="d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415" exitCode=0 Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.166554 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" event={"ID":"ff1c1630-344d-4dcb-a350-462316235c49","Type":"ContainerDied","Data":"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415"} Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.166576 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" event={"ID":"ff1c1630-344d-4dcb-a350-462316235c49","Type":"ContainerDied","Data":"a32628303ddc40e2c4e9284b72e7dfe72eea0003ffb24b8cec2277f20f482172"} Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.166644 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd8668598-jtsv7" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.196882 4660 scope.go:117] "RemoveContainer" containerID="36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7" Nov 29 07:21:51 crc kubenswrapper[4660]: E1129 07:21:51.197786 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7\": container with ID starting with 36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7 not found: ID does not exist" containerID="36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.197817 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7"} err="failed to get container status \"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7\": rpc error: code = NotFound desc = could not find container \"36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7\": container with ID starting with 36e0f22922dd7d9ed268e12639f05282855d1730629e1c4432ec1c789e80adc7 not found: ID does not exist" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.197839 4660 scope.go:117] "RemoveContainer" containerID="d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.198690 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"] Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.211864 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64947b58b-ddlzb"] Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.214965 4660 scope.go:117] "RemoveContainer" containerID="d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415" Nov 29 07:21:51 crc kubenswrapper[4660]: E1129 07:21:51.215367 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415\": container with ID starting with d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415 not found: ID does not exist" containerID="d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.215398 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415"} err="failed to get container status \"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415\": rpc error: code = NotFound desc = could not find container \"d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415\": container with ID starting with d221c29da4fdffb92d10b4eb35da8b5d119faf01a68f429295306a0faf913415 not found: ID does not exist" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.215834 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"] Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.221419 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bd8668598-jtsv7"] Nov 29 07:21:51 crc 
kubenswrapper[4660]: I1129 07:21:51.353892 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.701368 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f3c9e5-09c2-4178-b7ba-eb854b1e4841" path="/var/lib/kubelet/pods/a6f3c9e5-09c2-4178-b7ba-eb854b1e4841/volumes" Nov 29 07:21:51 crc kubenswrapper[4660]: I1129 07:21:51.702053 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1c1630-344d-4dcb-a350-462316235c49" path="/var/lib/kubelet/pods/ff1c1630-344d-4dcb-a350-462316235c49/volumes" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.174844 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" event={"ID":"32cbca8b-b875-4048-b456-cb5ed1de50a4","Type":"ContainerStarted","Data":"0bd57616ee18606579d37c6c7bf81da44371da58af5e76df15da3b57e45373ae"} Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.174880 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" event={"ID":"32cbca8b-b875-4048-b456-cb5ed1de50a4","Type":"ContainerStarted","Data":"8b158f516113fc58922ae017f17f040d99c8bd5c31b92a44a88c7b8eac67e6a2"} Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.175112 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.180374 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.205785 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" podStartSLOduration=4.205765984 podStartE2EDuration="4.205765984s" podCreationTimestamp="2025-11-29 07:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:52.200648485 +0000 UTC m=+402.754178404" watchObservedRunningTime="2025-11-29 07:21:52.205765984 +0000 UTC m=+402.759295893" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.762848 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.763858 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.765751 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.766229 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.766532 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.766860 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.766999 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.767133 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.783120 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.858713 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.858810 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.858839 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.858862 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxvc\" (UniqueName: \"kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.960502 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca\") pod 
\"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.960835 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.960976 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxvc\" (UniqueName: \"kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.961120 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.961487 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.962395 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.968352 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:52 crc kubenswrapper[4660]: I1129 07:21:52.977533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxvc\" (UniqueName: \"kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc\") pod \"route-controller-manager-54d4cc6664-m774q\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:53 crc kubenswrapper[4660]: I1129 07:21:53.086221 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:53 crc kubenswrapper[4660]: I1129 07:21:53.481230 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:54 crc kubenswrapper[4660]: I1129 07:21:54.188391 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" event={"ID":"3824f960-1ca1-4faf-8238-5057131ea0c5","Type":"ContainerStarted","Data":"ea2ea2043abc3057d259c33ee9565b00c08cca80dfbd17a4cf3c0d2053482315"} Nov 29 07:21:54 crc kubenswrapper[4660]: I1129 07:21:54.189108 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" event={"ID":"3824f960-1ca1-4faf-8238-5057131ea0c5","Type":"ContainerStarted","Data":"2bacc5ca5a78c38d299c79c77b6220b90c9851f7940f300176ce24de4f6829c0"} Nov 29 07:21:54 crc kubenswrapper[4660]: I1129 07:21:54.189221 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:54 crc kubenswrapper[4660]: I1129 07:21:54.194216 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:54 crc kubenswrapper[4660]: I1129 07:21:54.209346 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" podStartSLOduration=6.209324569 podStartE2EDuration="6.209324569s" podCreationTimestamp="2025-11-29 07:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:21:54.205414095 +0000 UTC m=+404.758943994" watchObservedRunningTime="2025-11-29 07:21:54.209324569 +0000 UTC m=+404.762854468" Nov 29 07:21:57 crc kubenswrapper[4660]: I1129 07:21:57.827929 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:57 crc kubenswrapper[4660]: I1129 07:21:57.828479 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" podUID="32cbca8b-b875-4048-b456-cb5ed1de50a4" containerName="controller-manager" containerID="cri-o://0bd57616ee18606579d37c6c7bf81da44371da58af5e76df15da3b57e45373ae" gracePeriod=30 Nov 29 07:21:57 crc kubenswrapper[4660]: I1129 07:21:57.844794 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:57 crc kubenswrapper[4660]: I1129 07:21:57.845479 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" podUID="3824f960-1ca1-4faf-8238-5057131ea0c5" containerName="route-controller-manager" containerID="cri-o://ea2ea2043abc3057d259c33ee9565b00c08cca80dfbd17a4cf3c0d2053482315" gracePeriod=30 Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.225264 4660 generic.go:334] "Generic (PLEG): container finished" podID="3824f960-1ca1-4faf-8238-5057131ea0c5" containerID="ea2ea2043abc3057d259c33ee9565b00c08cca80dfbd17a4cf3c0d2053482315" exitCode=0 Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 
07:21:58.225469 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" event={"ID":"3824f960-1ca1-4faf-8238-5057131ea0c5","Type":"ContainerDied","Data":"ea2ea2043abc3057d259c33ee9565b00c08cca80dfbd17a4cf3c0d2053482315"} Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.227674 4660 generic.go:334] "Generic (PLEG): container finished" podID="32cbca8b-b875-4048-b456-cb5ed1de50a4" containerID="0bd57616ee18606579d37c6c7bf81da44371da58af5e76df15da3b57e45373ae" exitCode=0 Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.227719 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" event={"ID":"32cbca8b-b875-4048-b456-cb5ed1de50a4","Type":"ContainerDied","Data":"0bd57616ee18606579d37c6c7bf81da44371da58af5e76df15da3b57e45373ae"} Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.870829 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.897539 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:21:58 crc kubenswrapper[4660]: E1129 07:21:58.897799 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3824f960-1ca1-4faf-8238-5057131ea0c5" containerName="route-controller-manager" Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.897816 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824f960-1ca1-4faf-8238-5057131ea0c5" containerName="route-controller-manager" Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.898121 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3824f960-1ca1-4faf-8238-5057131ea0c5" containerName="route-controller-manager" Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.898535 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:58 crc kubenswrapper[4660]: I1129 07:21:58.911137 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.040904 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config\") pod \"3824f960-1ca1-4faf-8238-5057131ea0c5\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041025 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert\") pod \"3824f960-1ca1-4faf-8238-5057131ea0c5\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041052 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zxvc\" (UniqueName: \"kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc\") pod \"3824f960-1ca1-4faf-8238-5057131ea0c5\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041071 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca\") pod \"3824f960-1ca1-4faf-8238-5057131ea0c5\" (UID: \"3824f960-1ca1-4faf-8238-5057131ea0c5\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041264 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4j4g\" (UniqueName: \"kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041286 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041305 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041323 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc 
kubenswrapper[4660]: I1129 07:21:59.041667 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config" (OuterVolumeSpecName: "config") pod "3824f960-1ca1-4faf-8238-5057131ea0c5" (UID: "3824f960-1ca1-4faf-8238-5057131ea0c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.041994 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "3824f960-1ca1-4faf-8238-5057131ea0c5" (UID: "3824f960-1ca1-4faf-8238-5057131ea0c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.045784 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc" (OuterVolumeSpecName: "kube-api-access-5zxvc") pod "3824f960-1ca1-4faf-8238-5057131ea0c5" (UID: "3824f960-1ca1-4faf-8238-5057131ea0c5"). InnerVolumeSpecName "kube-api-access-5zxvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.045914 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3824f960-1ca1-4faf-8238-5057131ea0c5" (UID: "3824f960-1ca1-4faf-8238-5057131ea0c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.073343 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.142734 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4j4g\" (UniqueName: \"kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.142784 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.142816 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.142843 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.142982 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3824f960-1ca1-4faf-8238-5057131ea0c5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.143145 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zxvc\" (UniqueName: \"kubernetes.io/projected/3824f960-1ca1-4faf-8238-5057131ea0c5-kube-api-access-5zxvc\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.143174 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.143188 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3824f960-1ca1-4faf-8238-5057131ea0c5-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.143740 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.143877 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config\") pod 
\"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.147218 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.164366 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4j4g\" (UniqueName: \"kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g\") pod \"route-controller-manager-866f46fcdc-p8bps\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.234337 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.234332 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-k4vlg" event={"ID":"32cbca8b-b875-4048-b456-cb5ed1de50a4","Type":"ContainerDied","Data":"8b158f516113fc58922ae017f17f040d99c8bd5c31b92a44a88c7b8eac67e6a2"} Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.234465 4660 scope.go:117] "RemoveContainer" containerID="0bd57616ee18606579d37c6c7bf81da44371da58af5e76df15da3b57e45373ae" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.236229 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" event={"ID":"3824f960-1ca1-4faf-8238-5057131ea0c5","Type":"ContainerDied","Data":"2bacc5ca5a78c38d299c79c77b6220b90c9851f7940f300176ce24de4f6829c0"} Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.236312 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.237978 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.244394 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca\") pod \"32cbca8b-b875-4048-b456-cb5ed1de50a4\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.244511 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config\") pod \"32cbca8b-b875-4048-b456-cb5ed1de50a4\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.244545 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles\") pod \"32cbca8b-b875-4048-b456-cb5ed1de50a4\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.244588 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h58d7\" (UniqueName: \"kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7\") pod \"32cbca8b-b875-4048-b456-cb5ed1de50a4\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.244641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert\") pod \"32cbca8b-b875-4048-b456-cb5ed1de50a4\" (UID: \"32cbca8b-b875-4048-b456-cb5ed1de50a4\") " Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.245993 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "32cbca8b-b875-4048-b456-cb5ed1de50a4" (UID: "32cbca8b-b875-4048-b456-cb5ed1de50a4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.246043 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "32cbca8b-b875-4048-b456-cb5ed1de50a4" (UID: "32cbca8b-b875-4048-b456-cb5ed1de50a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.246194 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config" (OuterVolumeSpecName: "config") pod "32cbca8b-b875-4048-b456-cb5ed1de50a4" (UID: "32cbca8b-b875-4048-b456-cb5ed1de50a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.248188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32cbca8b-b875-4048-b456-cb5ed1de50a4" (UID: "32cbca8b-b875-4048-b456-cb5ed1de50a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.253787 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7" (OuterVolumeSpecName: "kube-api-access-h58d7") pod "32cbca8b-b875-4048-b456-cb5ed1de50a4" (UID: "32cbca8b-b875-4048-b456-cb5ed1de50a4"). InnerVolumeSpecName "kube-api-access-h58d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.258386 4660 scope.go:117] "RemoveContainer" containerID="ea2ea2043abc3057d259c33ee9565b00c08cca80dfbd17a4cf3c0d2053482315" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.266609 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.269558 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-m774q"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.346697 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.346719 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h58d7\" (UniqueName: \"kubernetes.io/projected/32cbca8b-b875-4048-b456-cb5ed1de50a4-kube-api-access-h58d7\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.346731 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32cbca8b-b875-4048-b456-cb5ed1de50a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.346739 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.346748 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32cbca8b-b875-4048-b456-cb5ed1de50a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.564583 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.571582 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-k4vlg"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.650414 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.701308 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cbca8b-b875-4048-b456-cb5ed1de50a4" path="/var/lib/kubelet/pods/32cbca8b-b875-4048-b456-cb5ed1de50a4/volumes" Nov 29 07:21:59 crc kubenswrapper[4660]: I1129 07:21:59.702436 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3824f960-1ca1-4faf-8238-5057131ea0c5" path="/var/lib/kubelet/pods/3824f960-1ca1-4faf-8238-5057131ea0c5/volumes" Nov 29 07:22:00 crc kubenswrapper[4660]: I1129 
07:22:00.243299 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" event={"ID":"38e424e7-1229-4a2e-9766-850aa93cec06","Type":"ContainerStarted","Data":"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7"} Nov 29 07:22:00 crc kubenswrapper[4660]: I1129 07:22:00.243343 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" event={"ID":"38e424e7-1229-4a2e-9766-850aa93cec06","Type":"ContainerStarted","Data":"3d868921e2e52dc5668e7948fd38f95cb4fa8fce2dbaebf58240b3399e32554c"} Nov 29 07:22:00 crc kubenswrapper[4660]: I1129 07:22:00.260991 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" podStartSLOduration=3.260969294 podStartE2EDuration="3.260969294s" podCreationTimestamp="2025-11-29 07:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:00.259293015 +0000 UTC m=+410.812822924" watchObservedRunningTime="2025-11-29 07:22:00.260969294 +0000 UTC m=+410.814499213" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.251594 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.259081 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.770946 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:01 crc kubenswrapper[4660]: E1129 07:22:01.771154 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cbca8b-b875-4048-b456-cb5ed1de50a4" containerName="controller-manager" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.771166 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cbca8b-b875-4048-b456-cb5ed1de50a4" containerName="controller-manager" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.771259 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cbca8b-b875-4048-b456-cb5ed1de50a4" containerName="controller-manager" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.771630 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.774495 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.774596 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.774718 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.775064 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.777803 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.779378 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.783982 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.789046 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.876800 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.876874 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.876937 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qgn\" (UniqueName: \"kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.876962 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.876989 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.978145 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8qgn\" (UniqueName: \"kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.978207 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.978240 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.978315 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.978358 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.979531 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.979946 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.980035 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc 
kubenswrapper[4660]: I1129 07:22:01.984407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:01 crc kubenswrapper[4660]: I1129 07:22:01.998565 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8qgn\" (UniqueName: \"kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn\") pod \"controller-manager-9b9b64d5f-wmx9l\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:02 crc kubenswrapper[4660]: I1129 07:22:02.092995 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:02 crc kubenswrapper[4660]: I1129 07:22:02.287499 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:02 crc kubenswrapper[4660]: W1129 07:22:02.299822 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc56cb5_2e1d_47b4_8dcc_b308bf4304a4.slice/crio-1da2fcef8e6e229683805d7409095f4ecfdb8c177ecfc680864357f319fceafd WatchSource:0}: Error finding container 1da2fcef8e6e229683805d7409095f4ecfdb8c177ecfc680864357f319fceafd: Status 404 returned error can't find the container with id 1da2fcef8e6e229683805d7409095f4ecfdb8c177ecfc680864357f319fceafd Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.265853 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" event={"ID":"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4","Type":"ContainerStarted","Data":"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2"} Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.266176 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.266326 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" event={"ID":"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4","Type":"ContainerStarted","Data":"1da2fcef8e6e229683805d7409095f4ecfdb8c177ecfc680864357f319fceafd"} Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.270315 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.301573 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" podStartSLOduration=6.301551685 podStartE2EDuration="6.301551685s" podCreationTimestamp="2025-11-29 07:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:03.281459547 +0000 UTC m=+413.834989446" watchObservedRunningTime="2025-11-29 07:22:03.301551685 +0000 UTC m=+413.855081594" Nov 29 07:22:03 crc kubenswrapper[4660]: I1129 07:22:03.729528 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.277603 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" podUID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" containerName="controller-manager" containerID="cri-o://70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2" gracePeriod=30 Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.500257 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.500684 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.759228 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.788123 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-jmcv2"] Nov 29 07:22:05 crc kubenswrapper[4660]: E1129 07:22:05.788402 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" containerName="controller-manager" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.788423 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" containerName="controller-manager" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.788561 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" containerName="controller-manager" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.789065 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.828432 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca\") pod \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.828551 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config\") pod \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.828589 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles\") pod \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.828687 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8qgn\" (UniqueName: \"kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn\") pod \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.828724 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert\") pod \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\" (UID: \"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4\") " Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.830711 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" (UID: "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.830992 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" (UID: "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.831519 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config" (OuterVolumeSpecName: "config") pod "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" (UID: "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.840576 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn" (OuterVolumeSpecName: "kube-api-access-q8qgn") pod "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" (UID: "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4"). InnerVolumeSpecName "kube-api-access-q8qgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.840727 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" (UID: "4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.895693 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-jmcv2"] Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.930845 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.930934 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-config\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.930959 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2wz\" (UniqueName: \"kubernetes.io/projected/d141cf8b-4233-4df4-814e-6af58e01bebd-kube-api-access-mc2wz\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.930987 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d141cf8b-4233-4df4-814e-6af58e01bebd-serving-cert\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931033 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-client-ca\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931081 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931094 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931105 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8qgn\" (UniqueName: 
\"kubernetes.io/projected/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-kube-api-access-q8qgn\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931119 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:05 crc kubenswrapper[4660]: I1129 07:22:05.931130 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.032815 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-client-ca\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.032897 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.032942 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-config\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.032964 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc2wz\" (UniqueName: \"kubernetes.io/projected/d141cf8b-4233-4df4-814e-6af58e01bebd-kube-api-access-mc2wz\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.032985 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d141cf8b-4233-4df4-814e-6af58e01bebd-serving-cert\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.034132 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-client-ca\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.034913 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-config\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc 
kubenswrapper[4660]: I1129 07:22:06.035155 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d141cf8b-4233-4df4-814e-6af58e01bebd-proxy-ca-bundles\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.037840 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d141cf8b-4233-4df4-814e-6af58e01bebd-serving-cert\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.048101 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc2wz\" (UniqueName: \"kubernetes.io/projected/d141cf8b-4233-4df4-814e-6af58e01bebd-kube-api-access-mc2wz\") pod \"controller-manager-87448f8dd-jmcv2\" (UID: \"d141cf8b-4233-4df4-814e-6af58e01bebd\") " pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.109249 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.285079 4660 generic.go:334] "Generic (PLEG): container finished" podID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" containerID="70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2" exitCode=0 Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.285190 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" event={"ID":"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4","Type":"ContainerDied","Data":"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2"} Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.285263 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.285598 4660 scope.go:117] "RemoveContainer" containerID="70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.285576 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l" event={"ID":"4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4","Type":"ContainerDied","Data":"1da2fcef8e6e229683805d7409095f4ecfdb8c177ecfc680864357f319fceafd"} Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.313595 4660 scope.go:117] "RemoveContainer" containerID="70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2" Nov 29 07:22:06 crc kubenswrapper[4660]: E1129 07:22:06.316742 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2\": container with ID starting with 70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2 not found: ID does not exist" containerID="70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.316769 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2"} err="failed to get container status \"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2\": rpc error: code = NotFound desc = could not find container \"70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2\": container with ID starting with 70720f4b7a52711fd759e4e9a4d5f2c478c6789667d834d495c3d836993f89e2 not found: ID does not exist" Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.325457 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.336819 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9b9b64d5f-wmx9l"] Nov 29 07:22:06 crc kubenswrapper[4660]: W1129 07:22:06.353236 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd141cf8b_4233_4df4_814e_6af58e01bebd.slice/crio-bfedbd19ab3e478bf23215c514a5c505651c9dc7a5cad21ea88c9746b32898b4 WatchSource:0}: Error finding container bfedbd19ab3e478bf23215c514a5c505651c9dc7a5cad21ea88c9746b32898b4: Status 404 returned error can't find the container with id bfedbd19ab3e478bf23215c514a5c505651c9dc7a5cad21ea88c9746b32898b4 Nov 29 07:22:06 crc kubenswrapper[4660]: I1129 07:22:06.354406 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-87448f8dd-jmcv2"] Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 07:22:07.291528 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" event={"ID":"d141cf8b-4233-4df4-814e-6af58e01bebd","Type":"ContainerStarted","Data":"dd32dc8d4979311eb5fb768d5a1a15ff314dfbc0ccbfaf744ce36684394b6674"} Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 07:22:07.291892 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 
07:22:07.291905 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" event={"ID":"d141cf8b-4233-4df4-814e-6af58e01bebd","Type":"ContainerStarted","Data":"bfedbd19ab3e478bf23215c514a5c505651c9dc7a5cad21ea88c9746b32898b4"} Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 07:22:07.344167 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 07:22:07.366453 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-87448f8dd-jmcv2" podStartSLOduration=4.366438238 podStartE2EDuration="4.366438238s" podCreationTimestamp="2025-11-29 07:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:07.320600496 +0000 UTC m=+417.874130405" watchObservedRunningTime="2025-11-29 07:22:07.366438238 +0000 UTC m=+417.919968137" Nov 29 07:22:07 crc kubenswrapper[4660]: I1129 07:22:07.701597 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4" path="/var/lib/kubelet/pods/4dc56cb5-2e1d-47b4-8dcc-b308bf4304a4/volumes" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.001938 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swwtq"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.003963 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swwtq" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="registry-server" containerID="cri-o://336741f347ff9706ab357fa16ae49a7b574baf5667b22606fe6fe6e71141afec" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.010493 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.010802 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vw7mz" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="registry-server" containerID="cri-o://19f318bddb3ab862bb442b59e0ca1e3261347e9f441e9fb645d7cf810b52efa3" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.022673 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5w6w"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.022909 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m5w6w" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="registry-server" containerID="cri-o://ca6b3a498863d451a26ad43b9417b7ccafaa8d60fdee5caf31aca2d1ce4c9eb4" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.043411 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdtvz"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.043773 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mdtvz" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="registry-server" containerID="cri-o://8dedc36190a8daa62a9548e928c1fcaee53941d1017cc07debcd67e094c5b977" gracePeriod=30 Nov 29 07:22:09 crc 
kubenswrapper[4660]: I1129 07:22:09.048705 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.049012 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator" containerID="cri-o://3a7d48c8a6db46ee09451ea686168bf1cd076e244fe3d67347ea7176637c8de9" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.054485 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.054734 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tk9c4" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="registry-server" containerID="cri-o://b6682ab2c45b4946e69acc5808cc22317776167390825b0f42ce9102c13bfd3d" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.064789 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.066004 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6mtm" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="registry-server" containerID="cri-o://e661d2fd77aeb8c999a075d9343c1d1ab5a02cf343f953d4900435154b17dcc2" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.076802 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4msqn"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.077426 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.086449 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.086752 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wd4st" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="registry-server" containerID="cri-o://4fab74a06b7cc6834c7fa30ac94fa28a94ca73eb4c3d1daf55ef28b384c16121" gracePeriod=30 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.094437 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4msqn"] Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.184715 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.184762 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5pg\" (UniqueName: \"kubernetes.io/projected/f9482d0d-cad1-43a2-a0f9-523323125ae2-kube-api-access-kj5pg\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.184848 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.286991 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.287054 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj5pg\" (UniqueName: \"kubernetes.io/projected/f9482d0d-cad1-43a2-a0f9-523323125ae2-kube-api-access-kj5pg\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.287102 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.290256 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.296226 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9482d0d-cad1-43a2-a0f9-523323125ae2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.314564 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj5pg\" (UniqueName: \"kubernetes.io/projected/f9482d0d-cad1-43a2-a0f9-523323125ae2-kube-api-access-kj5pg\") pod \"marketplace-operator-79b997595-4msqn\" (UID: \"f9482d0d-cad1-43a2-a0f9-523323125ae2\") " pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.315711 4660 generic.go:334] "Generic (PLEG): container finished" podID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerID="b6682ab2c45b4946e69acc5808cc22317776167390825b0f42ce9102c13bfd3d" exitCode=0 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.315767 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerDied","Data":"b6682ab2c45b4946e69acc5808cc22317776167390825b0f42ce9102c13bfd3d"} Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.317307 4660 generic.go:334] "Generic (PLEG): container finished" podID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerID="19f318bddb3ab862bb442b59e0ca1e3261347e9f441e9fb645d7cf810b52efa3" exitCode=0 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.317583 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerDied","Data":"19f318bddb3ab862bb442b59e0ca1e3261347e9f441e9fb645d7cf810b52efa3"} Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.322506 4660 generic.go:334] "Generic (PLEG): container finished" podID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerID="336741f347ff9706ab357fa16ae49a7b574baf5667b22606fe6fe6e71141afec" exitCode=0 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.322579 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerDied","Data":"336741f347ff9706ab357fa16ae49a7b574baf5667b22606fe6fe6e71141afec"} Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.326873 4660 generic.go:334] "Generic (PLEG): container finished" podID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerID="ca6b3a498863d451a26ad43b9417b7ccafaa8d60fdee5caf31aca2d1ce4c9eb4" exitCode=0 Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.326979 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerDied","Data":"ca6b3a498863d451a26ad43b9417b7ccafaa8d60fdee5caf31aca2d1ce4c9eb4"} Nov 29 07:22:09 crc 
kubenswrapper[4660]: I1129 07:22:09.334597 4660 generic.go:334] "Generic (PLEG): container finished" podID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerID="e661d2fd77aeb8c999a075d9343c1d1ab5a02cf343f953d4900435154b17dcc2" exitCode=0
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.334668 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerDied","Data":"e661d2fd77aeb8c999a075d9343c1d1ab5a02cf343f953d4900435154b17dcc2"}
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.348834 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-974tz_6a035a3a-155a-4b6e-ac5c-ca7118e1443d/marketplace-operator/1.log"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.348887 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerID="3a7d48c8a6db46ee09451ea686168bf1cd076e244fe3d67347ea7176637c8de9" exitCode=0
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.348952 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerDied","Data":"3a7d48c8a6db46ee09451ea686168bf1cd076e244fe3d67347ea7176637c8de9"}
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.348994 4660 scope.go:117] "RemoveContainer" containerID="dfac803ebcd689a9854064e0e58594db716d7330e1a60277e802ef55e8e48cf0"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.351138 4660 generic.go:334] "Generic (PLEG): container finished" podID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerID="8dedc36190a8daa62a9548e928c1fcaee53941d1017cc07debcd67e094c5b977" exitCode=0
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.351314 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerDied","Data":"8dedc36190a8daa62a9548e928c1fcaee53941d1017cc07debcd67e094c5b977"}
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.356896 4660 generic.go:334] "Generic (PLEG): container finished" podID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerID="4fab74a06b7cc6834c7fa30ac94fa28a94ca73eb4c3d1daf55ef28b384c16121" exitCode=0
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.357633 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerDied","Data":"4fab74a06b7cc6834c7fa30ac94fa28a94ca73eb4c3d1daf55ef28b384c16121"}
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.505498 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.599258 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.692220 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities\") pod \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.692306 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content\") pod \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.692378 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tpl\" (UniqueName: \"kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl\") pod \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\" (UID: \"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.694206 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities" (OuterVolumeSpecName: "utilities") pod "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" (UID: "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.699016 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl" (OuterVolumeSpecName: "kube-api-access-t7tpl") pod "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" (UID: "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2"). InnerVolumeSpecName "kube-api-access-t7tpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.782561 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdtvz"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.798534 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.798838 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tpl\" (UniqueName: \"kubernetes.io/projected/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-kube-api-access-t7tpl\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.861222 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-974tz"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.872795 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.874976 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vw7mz"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.886939 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" (UID: "3d455272-6d6e-4fa8-8a59-60ddcaf10ab2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.887473 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.900186 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkv59\" (UniqueName: \"kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59\") pod \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.900315 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities\") pod \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.900432 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content\") pod \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\" (UID: \"b38d4bdc-266e-423b-89e8-4bea085d5ce7\") "
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.900809 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.902188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities" (OuterVolumeSpecName: "utilities") pod "b38d4bdc-266e-423b-89e8-4bea085d5ce7" (UID: "b38d4bdc-266e-423b-89e8-4bea085d5ce7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.907431 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59" (OuterVolumeSpecName: "kube-api-access-hkv59") pod "b38d4bdc-266e-423b-89e8-4bea085d5ce7" (UID: "b38d4bdc-266e-423b-89e8-4bea085d5ce7"). InnerVolumeSpecName "kube-api-access-hkv59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:09 crc kubenswrapper[4660]: I1129 07:22:09.986141 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b38d4bdc-266e-423b-89e8-4bea085d5ce7" (UID: "b38d4bdc-266e-423b-89e8-4bea085d5ce7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001275 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content\") pod \"07c2303f-89f5-4280-8830-05e28e5a1d96\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001321 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxjr\" (UniqueName: \"kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr\") pod \"07c2303f-89f5-4280-8830-05e28e5a1d96\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001368 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics\") pod \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001388 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities\") pod \"2071aaa8-38a7-47d8-bf67-b3862af09221\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001436 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content\") pod \"2071aaa8-38a7-47d8-bf67-b3862af09221\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001459 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca\") pod \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001491 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities\") pod \"07c2303f-89f5-4280-8830-05e28e5a1d96\" (UID: \"07c2303f-89f5-4280-8830-05e28e5a1d96\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities\") pod \"0787f5de-a9f4-435c-8553-fcb080d3950b\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001538 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4qpx\" (UniqueName: \"kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx\") pod \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\" (UID: \"6a035a3a-155a-4b6e-ac5c-ca7118e1443d\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001555 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr596\" (UniqueName: \"kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596\") pod \"2071aaa8-38a7-47d8-bf67-b3862af09221\" (UID: \"2071aaa8-38a7-47d8-bf67-b3862af09221\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001583 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z49zb\" (UniqueName: \"kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb\") pod \"0787f5de-a9f4-435c-8553-fcb080d3950b\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001605 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content\") pod \"0787f5de-a9f4-435c-8553-fcb080d3950b\" (UID: \"0787f5de-a9f4-435c-8553-fcb080d3950b\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001824 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001836 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b38d4bdc-266e-423b-89e8-4bea085d5ce7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.001846 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkv59\" (UniqueName: \"kubernetes.io/projected/b38d4bdc-266e-423b-89e8-4bea085d5ce7-kube-api-access-hkv59\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.003997 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6a035a3a-155a-4b6e-ac5c-ca7118e1443d" (UID: "6a035a3a-155a-4b6e-ac5c-ca7118e1443d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.005819 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities" (OuterVolumeSpecName: "utilities") pod "2071aaa8-38a7-47d8-bf67-b3862af09221" (UID: "2071aaa8-38a7-47d8-bf67-b3862af09221"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.008098 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6a035a3a-155a-4b6e-ac5c-ca7118e1443d" (UID: "6a035a3a-155a-4b6e-ac5c-ca7118e1443d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.016141 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx" (OuterVolumeSpecName: "kube-api-access-k4qpx") pod "6a035a3a-155a-4b6e-ac5c-ca7118e1443d" (UID: "6a035a3a-155a-4b6e-ac5c-ca7118e1443d"). InnerVolumeSpecName "kube-api-access-k4qpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.021514 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities" (OuterVolumeSpecName: "utilities") pod "07c2303f-89f5-4280-8830-05e28e5a1d96" (UID: "07c2303f-89f5-4280-8830-05e28e5a1d96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.023863 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities" (OuterVolumeSpecName: "utilities") pod "0787f5de-a9f4-435c-8553-fcb080d3950b" (UID: "0787f5de-a9f4-435c-8553-fcb080d3950b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.050067 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb" (OuterVolumeSpecName: "kube-api-access-z49zb") pod "0787f5de-a9f4-435c-8553-fcb080d3950b" (UID: "0787f5de-a9f4-435c-8553-fcb080d3950b"). InnerVolumeSpecName "kube-api-access-z49zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.056313 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0787f5de-a9f4-435c-8553-fcb080d3950b" (UID: "0787f5de-a9f4-435c-8553-fcb080d3950b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.068583 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596" (OuterVolumeSpecName: "kube-api-access-jr596") pod "2071aaa8-38a7-47d8-bf67-b3862af09221" (UID: "2071aaa8-38a7-47d8-bf67-b3862af09221"). InnerVolumeSpecName "kube-api-access-jr596". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.085229 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd4st"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.090119 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr" (OuterVolumeSpecName: "kube-api-access-rnxjr") pod "07c2303f-89f5-4280-8830-05e28e5a1d96" (UID: "07c2303f-89f5-4280-8830-05e28e5a1d96"). InnerVolumeSpecName "kube-api-access-rnxjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.110450 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c2303f-89f5-4280-8830-05e28e5a1d96" (UID: "07c2303f-89f5-4280-8830-05e28e5a1d96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115304 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115331 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115347 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxjr\" (UniqueName: \"kubernetes.io/projected/07c2303f-89f5-4280-8830-05e28e5a1d96-kube-api-access-rnxjr\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115363 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115376 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115387 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115401 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2303f-89f5-4280-8830-05e28e5a1d96-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115411 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0787f5de-a9f4-435c-8553-fcb080d3950b-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115421 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4qpx\" (UniqueName: \"kubernetes.io/projected/6a035a3a-155a-4b6e-ac5c-ca7118e1443d-kube-api-access-k4qpx\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115432 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr596\" (UniqueName: \"kubernetes.io/projected/2071aaa8-38a7-47d8-bf67-b3862af09221-kube-api-access-jr596\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.115443 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z49zb\" (UniqueName: \"kubernetes.io/projected/0787f5de-a9f4-435c-8553-fcb080d3950b-kube-api-access-z49zb\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.130415 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6mtm"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.145426 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2071aaa8-38a7-47d8-bf67-b3862af09221" (UID: "2071aaa8-38a7-47d8-bf67-b3862af09221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216435 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tn5f\" (UniqueName: \"kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f\") pod \"a157b019-1b17-4d7e-8a47-868b3d24496f\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216473 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrt8m\" (UniqueName: \"kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m\") pod \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216493 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content\") pod \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216561 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities\") pod \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\" (UID: \"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216592 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content\") pod \"a157b019-1b17-4d7e-8a47-868b3d24496f\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216647 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities\") pod \"a157b019-1b17-4d7e-8a47-868b3d24496f\" (UID: \"a157b019-1b17-4d7e-8a47-868b3d24496f\") "
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.216853 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2071aaa8-38a7-47d8-bf67-b3862af09221-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.217701 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities" (OuterVolumeSpecName: "utilities") pod "a157b019-1b17-4d7e-8a47-868b3d24496f" (UID: "a157b019-1b17-4d7e-8a47-868b3d24496f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.218666 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities" (OuterVolumeSpecName: "utilities") pod "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" (UID: "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.244669 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.256293 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f" (OuterVolumeSpecName: "kube-api-access-9tn5f") pod "a157b019-1b17-4d7e-8a47-868b3d24496f" (UID: "a157b019-1b17-4d7e-8a47-868b3d24496f"). InnerVolumeSpecName "kube-api-access-9tn5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.260019 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m" (OuterVolumeSpecName: "kube-api-access-mrt8m") pod "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" (UID: "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf"). InnerVolumeSpecName "kube-api-access-mrt8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.317966 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tn5f\" (UniqueName: \"kubernetes.io/projected/a157b019-1b17-4d7e-8a47-868b3d24496f-kube-api-access-9tn5f\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.317992 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrt8m\" (UniqueName: \"kubernetes.io/projected/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-kube-api-access-mrt8m\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.318002 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.318010 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.364430 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" (UID: "c165ea6a-e592-4d7f-b35c-314fd0bf1cbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.373358 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swwtq" event={"ID":"07c2303f-89f5-4280-8830-05e28e5a1d96","Type":"ContainerDied","Data":"aa438f0fa92fe24d5528ff5cc54149115207a9f83cafe24ae59fc918856f6f54"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.373405 4660 scope.go:117] "RemoveContainer" containerID="336741f347ff9706ab357fa16ae49a7b574baf5667b22606fe6fe6e71141afec"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.373834 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swwtq"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.376215 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6mtm" event={"ID":"a157b019-1b17-4d7e-8a47-868b3d24496f","Type":"ContainerDied","Data":"a413cbc9782276f3f49196eb00c1b7ff21f1ff6b6a6ba1d80d47c84a8e74102a"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.376303 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6mtm"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.378764 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4msqn"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.385435 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tk9c4" event={"ID":"0787f5de-a9f4-435c-8553-fcb080d3950b","Type":"ContainerDied","Data":"42d180d9d1ca2d959e21a52c28a7036d9382d1b9dbb9a00190482e77628598b7"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.385810 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tk9c4"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.394147 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-974tz"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.394760 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-974tz" event={"ID":"6a035a3a-155a-4b6e-ac5c-ca7118e1443d","Type":"ContainerDied","Data":"76a3c21f6b0fe7e96fff7b1d69e05c705f9d889ea4cb1d6e811f0763be419460"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.396372 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd4st" event={"ID":"c165ea6a-e592-4d7f-b35c-314fd0bf1cbf","Type":"ContainerDied","Data":"4b9ad9ab1ae28a00e11286629af3117fc40e792b365b12b7be18d61ba9651ee8"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.396534 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd4st"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.400735 4660 scope.go:117] "RemoveContainer" containerID="3e74eb081deec91621ee3046be952b14689a15b434e7458c093f1a45355f5232"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.419478 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.435078 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a157b019-1b17-4d7e-8a47-868b3d24496f" (UID: "a157b019-1b17-4d7e-8a47-868b3d24496f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.448013 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vw7mz"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.449503 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vw7mz" event={"ID":"2071aaa8-38a7-47d8-bf67-b3862af09221","Type":"ContainerDied","Data":"76918e7d0c4f33840bec9da79ddccc8b36e0281b33e9cb8d4d8621436f5372ea"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.454367 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5w6w" event={"ID":"3d455272-6d6e-4fa8-8a59-60ddcaf10ab2","Type":"ContainerDied","Data":"faaa8e9e3cd18c06f7fe65c2f599eafa5145b3500f5afb115ebe31a366090650"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.454500 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5w6w"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.455492 4660 scope.go:117] "RemoveContainer" containerID="a492bbd911e16635c86b02bf1fc654b76a49496bb777bd76e45edc1cea13e6f9"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.463750 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdtvz" event={"ID":"b38d4bdc-266e-423b-89e8-4bea085d5ce7","Type":"ContainerDied","Data":"42d78aea31b00a4f69fefb990a5e66fe67130914d47a890a55e3d711f7982e4e"}
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.463863 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdtvz"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.494302 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.498097 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tk9c4"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.511184 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swwtq"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.513328 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swwtq"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.520165 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a157b019-1b17-4d7e-8a47-868b3d24496f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.524510 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.533305 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-974tz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.535312 4660 scope.go:117] "RemoveContainer" containerID="e661d2fd77aeb8c999a075d9343c1d1ab5a02cf343f953d4900435154b17dcc2"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.547280 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.551994 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wd4st"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.560523 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5w6w"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.563835 4660 scope.go:117] "RemoveContainer" containerID="ceba46ebcf588474bc34ad4414bcad73d3f43e76710633c48f8bc5e44c2fd2ba"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.564527 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m5w6w"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.567898 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.570178 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vw7mz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.585836 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdtvz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.591280 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mdtvz"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.591603 4660 scope.go:117] "RemoveContainer" containerID="392675fbb77d7416b4f50ee6c85e28556efd2e02bb8b0d51793c0ee6b3f27507"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.607060 4660 scope.go:117] "RemoveContainer" containerID="b6682ab2c45b4946e69acc5808cc22317776167390825b0f42ce9102c13bfd3d"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.623386 4660 scope.go:117] "RemoveContainer" containerID="328cea9a11a78a7ebdfe0e2ccc44e4ad823f0ab4e88c4015b6b24a09be16949f"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.647463 4660 scope.go:117] "RemoveContainer" containerID="e7c8a7fef523af8f66cb71767450cdbd8b13199705fc4966a68dbb1731d2238d"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.658942 4660 scope.go:117] "RemoveContainer" containerID="3a7d48c8a6db46ee09451ea686168bf1cd076e244fe3d67347ea7176637c8de9"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.672670 4660 scope.go:117] "RemoveContainer" containerID="4fab74a06b7cc6834c7fa30ac94fa28a94ca73eb4c3d1daf55ef28b384c16121"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.689536 4660 scope.go:117] "RemoveContainer" containerID="9c63e8aea470e713620d42e2c24fefd573d0c8d0315538fa1b2a6bd6b7835cad"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.705796 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.711752 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6mtm"]
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.719108 4660 scope.go:117] "RemoveContainer" containerID="eb66690338b478410252df123517159b0f3788ef9d4a644ba0e39eb8e9fe01f8"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.736593 4660 scope.go:117] "RemoveContainer" containerID="19f318bddb3ab862bb442b59e0ca1e3261347e9f441e9fb645d7cf810b52efa3"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.748876 4660 scope.go:117] "RemoveContainer" containerID="7a06e917a8c63a2aa9a3f5148c2a14236aeea6bd19f62e1a21646380cb098bd6"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.767387 4660 scope.go:117] "RemoveContainer" containerID="d421c36b18a478c45100cd4055f8279a9eec728cf9441ee07e86faa96ce39e21"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.798243 4660 scope.go:117] "RemoveContainer" containerID="ca6b3a498863d451a26ad43b9417b7ccafaa8d60fdee5caf31aca2d1ce4c9eb4"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.818660 4660 scope.go:117] "RemoveContainer" containerID="714152624d7f3a029082b06ccc2c68cb37042c99720f30e6c6d9f5e04c78880a"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.838421 4660 scope.go:117] "RemoveContainer" containerID="90c373b115b8e84e0aafc92bd10e1404f6c8d81bf28dadeb06412630b9bbe12f"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.850369 4660 scope.go:117] "RemoveContainer" containerID="8dedc36190a8daa62a9548e928c1fcaee53941d1017cc07debcd67e094c5b977"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.863741 4660 scope.go:117] "RemoveContainer" containerID="5e7d5901485f5a9b1a956e524bf43a31e464d3a9b4cad0c92f9202607507a9c2"
Nov 29 07:22:10 crc kubenswrapper[4660]: I1129 07:22:10.883931 4660 scope.go:117] "RemoveContainer" containerID="169ee78f1ac0eb03427003808816545a0f680d135938bff288306480b5392f8c"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.482324 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" event={"ID":"f9482d0d-cad1-43a2-a0f9-523323125ae2","Type":"ContainerStarted","Data":"343d7fb51b8774879182c37b3310f7d6606e3fa509299003ccd1ffdeba3688b8"}
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.482567 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" event={"ID":"f9482d0d-cad1-43a2-a0f9-523323125ae2","Type":"ContainerStarted","Data":"e5ffddc03d2293d5b4e44f72a4f58259986800f065bb4b5225ae3effce8e35d6"}
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.484077 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.487004 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.504144 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4msqn" podStartSLOduration=2.5041247220000002 podStartE2EDuration="2.504124722s" podCreationTimestamp="2025-11-29 07:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:11.498514718 +0000 UTC m=+422.052044657" watchObservedRunningTime="2025-11-29 07:22:11.504124722 +0000 UTC m=+422.057654631"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.699807 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" path="/var/lib/kubelet/pods/0787f5de-a9f4-435c-8553-fcb080d3950b/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.701079 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" path="/var/lib/kubelet/pods/07c2303f-89f5-4280-8830-05e28e5a1d96/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.701809 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" path="/var/lib/kubelet/pods/2071aaa8-38a7-47d8-bf67-b3862af09221/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.703164 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" path="/var/lib/kubelet/pods/3d455272-6d6e-4fa8-8a59-60ddcaf10ab2/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.703918 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" path="/var/lib/kubelet/pods/6a035a3a-155a-4b6e-ac5c-ca7118e1443d/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.705021 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" path="/var/lib/kubelet/pods/a157b019-1b17-4d7e-8a47-868b3d24496f/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.705720 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" path="/var/lib/kubelet/pods/b38d4bdc-266e-423b-89e8-4bea085d5ce7/volumes"
Nov 29 07:22:11 crc kubenswrapper[4660]: I1129 07:22:11.706413 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" path="/var/lib/kubelet/pods/c165ea6a-e592-4d7f-b35c-314fd0bf1cbf/volumes"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.047297 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8rvfj"]
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.047701 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.047771 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.047936 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048022 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048087 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048149 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048205 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048264 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048325 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048381 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048439 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048498 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048555 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048658 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048729 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048790 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048846 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.048898 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.048970 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049049 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.049121 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049203 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.049313 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049395 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.049466 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049536 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.049607 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049713 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.049797 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.049867 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.051378 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.051459 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="extract-utilities"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.051518 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.051591 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.051669 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.051731 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.051789 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.051842 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.051908 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.051973 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.052042 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052052 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.052061 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052066 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.052073 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052079 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="extract-content"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052221 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0787f5de-a9f4-435c-8553-fcb080d3950b" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052232 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052240 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c165ea6a-e592-4d7f-b35c-314fd0bf1cbf" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052247 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2071aaa8-38a7-47d8-bf67-b3862af09221" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052256 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d455272-6d6e-4fa8-8a59-60ddcaf10ab2" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052266 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c2303f-89f5-4280-8830-05e28e5a1d96" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052278 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38d4bdc-266e-423b-89e8-4bea085d5ce7" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052286 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a157b019-1b17-4d7e-8a47-868b3d24496f" containerName="registry-server"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052294 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: E1129 07:22:12.052370 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052377 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.052457 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a035a3a-155a-4b6e-ac5c-ca7118e1443d" containerName="marketplace-operator"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.053039 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.057130 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.062283 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rvfj"]
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.138682 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.138755 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.138846 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-452pv\" (UniqueName: \"kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.240493 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-452pv\" (UniqueName: \"kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.240558 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.240633 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.241076 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.241220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.268756 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-452pv\" (UniqueName: \"kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv\") pod \"community-operators-8rvfj\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.374573 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvfj"
Nov 29 07:22:12 crc kubenswrapper[4660]: I1129 07:22:12.789446 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rvfj"]
Nov 29 07:22:12 crc kubenswrapper[4660]: W1129 07:22:12.797845 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd65de9ee_1062_4bbe_bfef_1e39897b418f.slice/crio-fad20ce2f6e1cf11af6c781b3a892678e80576964eae96dcd53c059c1e236a69 WatchSource:0}: Error finding container fad20ce2f6e1cf11af6c781b3a892678e80576964eae96dcd53c059c1e236a69: Status 404 returned error can't find the container with id fad20ce2f6e1cf11af6c781b3a892678e80576964eae96dcd53c059c1e236a69
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.048423 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-57l9d"]
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.049847 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.052425 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.067015 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57l9d"]
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.150961 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-catalog-content\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.151041 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-utilities\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.151070 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkvc\" (UniqueName: \"kubernetes.io/projected/19f67b0c-c303-4c77-84a8-5b3e11bac292-kube-api-access-dzkvc\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.252391 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-catalog-content\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.252472 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-utilities\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.252492 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzkvc\" (UniqueName: \"kubernetes.io/projected/19f67b0c-c303-4c77-84a8-5b3e11bac292-kube-api-access-dzkvc\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.252846 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-catalog-content\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.252864 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f67b0c-c303-4c77-84a8-5b3e11bac292-utilities\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.274112 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzkvc\" (UniqueName: \"kubernetes.io/projected/19f67b0c-c303-4c77-84a8-5b3e11bac292-kube-api-access-dzkvc\") pod \"redhat-marketplace-57l9d\" (UID: \"19f67b0c-c303-4c77-84a8-5b3e11bac292\") " pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.368362 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57l9d"
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.498526 4660 generic.go:334] "Generic (PLEG): container finished" podID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerID="5b6d6a854a1c2d8c1cdde43b5834346b03309edb3c7a797dd30468103d241c95" exitCode=0
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.499950 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerDied","Data":"5b6d6a854a1c2d8c1cdde43b5834346b03309edb3c7a797dd30468103d241c95"}
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.499973 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerStarted","Data":"fad20ce2f6e1cf11af6c781b3a892678e80576964eae96dcd53c059c1e236a69"}
Nov 29 07:22:13 crc kubenswrapper[4660]: I1129 07:22:13.797175 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57l9d"]
Nov 29 07:22:13 crc kubenswrapper[4660]: W1129 07:22:13.803112 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f67b0c_c303_4c77_84a8_5b3e11bac292.slice/crio-3be444839e88aa1488929ffea3bbe7f18417f5d2273ebf13d92508f0821567bd WatchSource:0}: Error finding container 3be444839e88aa1488929ffea3bbe7f18417f5d2273ebf13d92508f0821567bd: Status 404 returned error can't find the container with id 3be444839e88aa1488929ffea3bbe7f18417f5d2273ebf13d92508f0821567bd
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.450136 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9mbq"]
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.452120 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9mbq"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.457643 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.461816 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9mbq"]
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.507430 4660 generic.go:334] "Generic (PLEG): container finished" podID="19f67b0c-c303-4c77-84a8-5b3e11bac292" containerID="0fd063ddc72ded4bf585711de9a076cc6b7ab0a1360de515763c77a59259e83c" exitCode=0
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.507497 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57l9d" event={"ID":"19f67b0c-c303-4c77-84a8-5b3e11bac292","Type":"ContainerDied","Data":"0fd063ddc72ded4bf585711de9a076cc6b7ab0a1360de515763c77a59259e83c"}
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.507526 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57l9d" event={"ID":"19f67b0c-c303-4c77-84a8-5b3e11bac292","Type":"ContainerStarted","Data":"3be444839e88aa1488929ffea3bbe7f18417f5d2273ebf13d92508f0821567bd"}
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.515585 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerStarted","Data":"d6c4b38436f96ca33a7c6e7f8d4ac50f13f94afbada97ea28791e862c3c99296"}
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.570834 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-splzn\" (UniqueName: \"kubernetes.io/projected/eacee01a-4708-4371-8721-a6ae49dd8f01-kube-api-access-splzn\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.570886 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-catalog-content\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.570918 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-utilities\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.672270 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-utilities\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq"
Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.672656 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-splzn\" (UniqueName: \"kubernetes.io/projected/eacee01a-4708-4371-8721-a6ae49dd8f01-kube-api-access-splzn\") pod 
\"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.672684 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-catalog-content\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.673184 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-catalog-content\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.673444 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacee01a-4708-4371-8721-a6ae49dd8f01-utilities\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.712485 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-splzn\" (UniqueName: \"kubernetes.io/projected/eacee01a-4708-4371-8721-a6ae49dd8f01-kube-api-access-splzn\") pod \"redhat-operators-l9mbq\" (UID: \"eacee01a-4708-4371-8721-a6ae49dd8f01\") " pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:14 crc kubenswrapper[4660]: I1129 07:22:14.765559 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.145038 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9mbq"] Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.522250 4660 generic.go:334] "Generic (PLEG): container finished" podID="eacee01a-4708-4371-8721-a6ae49dd8f01" containerID="4a9a16200da1b4dcc8bef81357e54ad2b0b326113f259535f4c4ce7a19c73343" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.522370 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9mbq" event={"ID":"eacee01a-4708-4371-8721-a6ae49dd8f01","Type":"ContainerDied","Data":"4a9a16200da1b4dcc8bef81357e54ad2b0b326113f259535f4c4ce7a19c73343"} Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.522546 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9mbq" event={"ID":"eacee01a-4708-4371-8721-a6ae49dd8f01","Type":"ContainerStarted","Data":"a7c502ae10a66619afb2959bb8d24062c2dd80951adf121de25a62c2265faa91"} Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.531558 4660 generic.go:334] "Generic (PLEG): container finished" podID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerID="d6c4b38436f96ca33a7c6e7f8d4ac50f13f94afbada97ea28791e862c3c99296" exitCode=0 Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.532476 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerDied","Data":"d6c4b38436f96ca33a7c6e7f8d4ac50f13f94afbada97ea28791e862c3c99296"} Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.847050 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4d266"] Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.847958 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.851369 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.860893 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d266"] Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.892459 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-utilities\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.892508 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-catalog-content\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.892541 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m869g\" (UniqueName: \"kubernetes.io/projected/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-kube-api-access-m869g\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.994053 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m869g\" (UniqueName: \"kubernetes.io/projected/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-kube-api-access-m869g\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.994389 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-utilities\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.994421 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-catalog-content\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.994836 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-catalog-content\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:15 crc kubenswrapper[4660]: I1129 07:22:15.995290 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-utilities\") pod \"certified-operators-4d266\" (UID: 
\"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.016317 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m869g\" (UniqueName: \"kubernetes.io/projected/c1d8cc32-31a1-4eb6-866d-ce7bc2082570-kube-api-access-m869g\") pod \"certified-operators-4d266\" (UID: \"c1d8cc32-31a1-4eb6-866d-ce7bc2082570\") " pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.168646 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.539465 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerStarted","Data":"789968d8bc16da6606e502f055d17f452d2ab7fa60020c6c7aa28dd0b19aa7be"} Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.541196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9mbq" event={"ID":"eacee01a-4708-4371-8721-a6ae49dd8f01","Type":"ContainerStarted","Data":"5a0d68ebda75d0b3ad0c13934ea0bda41cded770d3d945fa2e807bf5d6f79840"} Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.543037 4660 generic.go:334] "Generic (PLEG): container finished" podID="19f67b0c-c303-4c77-84a8-5b3e11bac292" containerID="78888cdb549b39714df28eb6e30dea99bb1b06a20be0154570eaf747a7023c8e" exitCode=0 Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.543074 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57l9d" event={"ID":"19f67b0c-c303-4c77-84a8-5b3e11bac292","Type":"ContainerDied","Data":"78888cdb549b39714df28eb6e30dea99bb1b06a20be0154570eaf747a7023c8e"} Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.555462 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8rvfj" podStartSLOduration=2.019314329 podStartE2EDuration="4.555440608s" podCreationTimestamp="2025-11-29 07:22:12 +0000 UTC" firstStartedPulling="2025-11-29 07:22:13.501354073 +0000 UTC m=+424.054883972" lastFinishedPulling="2025-11-29 07:22:16.037480352 +0000 UTC m=+426.591010251" observedRunningTime="2025-11-29 07:22:16.55380224 +0000 UTC m=+427.107332139" watchObservedRunningTime="2025-11-29 07:22:16.555440608 +0000 UTC m=+427.108970507" Nov 29 07:22:16 crc kubenswrapper[4660]: W1129 07:22:16.624482 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1d8cc32_31a1_4eb6_866d_ce7bc2082570.slice/crio-7715cd951d2f06204e6a329095dc7d20ca5178150c51a8be7a81c1e39255bdb6 WatchSource:0}: Error finding container 7715cd951d2f06204e6a329095dc7d20ca5178150c51a8be7a81c1e39255bdb6: Status 404 returned error can't find the container with id 7715cd951d2f06204e6a329095dc7d20ca5178150c51a8be7a81c1e39255bdb6 Nov 29 07:22:16 crc kubenswrapper[4660]: I1129 07:22:16.625323 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d266"] Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.548906 4660 generic.go:334] "Generic (PLEG): container finished" podID="eacee01a-4708-4371-8721-a6ae49dd8f01" containerID="5a0d68ebda75d0b3ad0c13934ea0bda41cded770d3d945fa2e807bf5d6f79840" exitCode=0 Nov 29 07:22:17 crc kubenswrapper[4660]: 
I1129 07:22:17.549126 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9mbq" event={"ID":"eacee01a-4708-4371-8721-a6ae49dd8f01","Type":"ContainerDied","Data":"5a0d68ebda75d0b3ad0c13934ea0bda41cded770d3d945fa2e807bf5d6f79840"} Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.557262 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57l9d" event={"ID":"19f67b0c-c303-4c77-84a8-5b3e11bac292","Type":"ContainerStarted","Data":"cde4fa965e18de4270303afdeb34c73e317889ca8471f4dd8f44a88def382934"} Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.559006 4660 generic.go:334] "Generic (PLEG): container finished" podID="c1d8cc32-31a1-4eb6-866d-ce7bc2082570" containerID="8614b83cf98f8e43925c4f7870fa7a657a744d8855192b3f8042d0a90a5a5bc0" exitCode=0 Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.559075 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d266" event={"ID":"c1d8cc32-31a1-4eb6-866d-ce7bc2082570","Type":"ContainerDied","Data":"8614b83cf98f8e43925c4f7870fa7a657a744d8855192b3f8042d0a90a5a5bc0"} Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.559095 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d266" event={"ID":"c1d8cc32-31a1-4eb6-866d-ce7bc2082570","Type":"ContainerStarted","Data":"7715cd951d2f06204e6a329095dc7d20ca5178150c51a8be7a81c1e39255bdb6"} Nov 29 07:22:17 crc kubenswrapper[4660]: I1129 07:22:17.594456 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-57l9d" podStartSLOduration=2.021774472 podStartE2EDuration="4.594436381s" podCreationTimestamp="2025-11-29 07:22:13 +0000 UTC" firstStartedPulling="2025-11-29 07:22:14.509513162 +0000 UTC m=+425.063043071" lastFinishedPulling="2025-11-29 07:22:17.082175081 +0000 UTC m=+427.635704980" observedRunningTime="2025-11-29 07:22:17.588498407 +0000 UTC m=+428.142028316" watchObservedRunningTime="2025-11-29 07:22:17.594436381 +0000 UTC m=+428.147966280" Nov 29 07:22:18 crc kubenswrapper[4660]: I1129 07:22:18.566385 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d266" event={"ID":"c1d8cc32-31a1-4eb6-866d-ce7bc2082570","Type":"ContainerStarted","Data":"b0e2843c9ffd66ccd2483768e750d6cd2022c92997ac3201c287a4c9ab654606"} Nov 29 07:22:18 crc kubenswrapper[4660]: I1129 07:22:18.569824 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9mbq" event={"ID":"eacee01a-4708-4371-8721-a6ae49dd8f01","Type":"ContainerStarted","Data":"2472914743c4622547d7b54129f67f226528f739f72276ae22f647e7982b1850"} Nov 29 07:22:18 crc kubenswrapper[4660]: I1129 07:22:18.602792 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9mbq" podStartSLOduration=2.109581864 podStartE2EDuration="4.602771256s" podCreationTimestamp="2025-11-29 07:22:14 +0000 UTC" firstStartedPulling="2025-11-29 07:22:15.611138899 +0000 UTC m=+426.164668798" lastFinishedPulling="2025-11-29 07:22:18.104328291 +0000 UTC m=+428.657858190" observedRunningTime="2025-11-29 07:22:18.599926432 +0000 UTC m=+429.153456331" watchObservedRunningTime="2025-11-29 07:22:18.602771256 +0000 UTC m=+429.156301155" Nov 29 07:22:19 crc kubenswrapper[4660]: I1129 07:22:19.577950 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="c1d8cc32-31a1-4eb6-866d-ce7bc2082570" containerID="b0e2843c9ffd66ccd2483768e750d6cd2022c92997ac3201c287a4c9ab654606" exitCode=0 Nov 29 07:22:19 crc kubenswrapper[4660]: I1129 07:22:19.578007 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d266" event={"ID":"c1d8cc32-31a1-4eb6-866d-ce7bc2082570","Type":"ContainerDied","Data":"b0e2843c9ffd66ccd2483768e750d6cd2022c92997ac3201c287a4c9ab654606"} Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.186110 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hwjdf"] Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.187201 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.246772 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hwjdf"] Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264156 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e755aa46-ee10-40f0-9f13-bf7933f3d169-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264262 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-bound-sa-token\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264322 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwr74\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-kube-api-access-fwr74\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264368 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e755aa46-ee10-40f0-9f13-bf7933f3d169-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264411 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264455 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-tls\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: 
\"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264491 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-certificates\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.264522 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-trusted-ca\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.284708 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365131 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-tls\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365180 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-certificates\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365201 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-trusted-ca\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365225 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e755aa46-ee10-40f0-9f13-bf7933f3d169-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365252 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-bound-sa-token\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365287 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwr74\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-kube-api-access-fwr74\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365306 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e755aa46-ee10-40f0-9f13-bf7933f3d169-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.365730 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e755aa46-ee10-40f0-9f13-bf7933f3d169-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.367373 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-certificates\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.367715 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e755aa46-ee10-40f0-9f13-bf7933f3d169-trusted-ca\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.371415 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-registry-tls\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.381014 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e755aa46-ee10-40f0-9f13-bf7933f3d169-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.383107 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-bound-sa-token\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: \"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.383458 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwr74\" (UniqueName: \"kubernetes.io/projected/e755aa46-ee10-40f0-9f13-bf7933f3d169-kube-api-access-fwr74\") pod \"image-registry-66df7c8f76-hwjdf\" (UID: 
\"e755aa46-ee10-40f0-9f13-bf7933f3d169\") " pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.507365 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:21 crc kubenswrapper[4660]: I1129 07:22:21.922335 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hwjdf"] Nov 29 07:22:21 crc kubenswrapper[4660]: W1129 07:22:21.939502 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode755aa46_ee10_40f0_9f13_bf7933f3d169.slice/crio-5216c0a23e6316461ead5403cd3453fe9e2a42fb6736efa2dccd9d1e7df366a1 WatchSource:0}: Error finding container 5216c0a23e6316461ead5403cd3453fe9e2a42fb6736efa2dccd9d1e7df366a1: Status 404 returned error can't find the container with id 5216c0a23e6316461ead5403cd3453fe9e2a42fb6736efa2dccd9d1e7df366a1 Nov 29 07:22:22 crc kubenswrapper[4660]: I1129 07:22:22.374714 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 07:22:22 crc kubenswrapper[4660]: I1129 07:22:22.375316 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 07:22:22 crc kubenswrapper[4660]: I1129 07:22:22.441715 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 07:22:22 crc kubenswrapper[4660]: I1129 07:22:22.617077 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" event={"ID":"e755aa46-ee10-40f0-9f13-bf7933f3d169","Type":"ContainerStarted","Data":"5216c0a23e6316461ead5403cd3453fe9e2a42fb6736efa2dccd9d1e7df366a1"} Nov 29 07:22:22 crc kubenswrapper[4660]: I1129 07:22:22.671582 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.369518 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-57l9d" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.369576 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-57l9d" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.413060 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-57l9d" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.622998 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d266" event={"ID":"c1d8cc32-31a1-4eb6-866d-ce7bc2082570","Type":"ContainerStarted","Data":"154e40f1c69fc4e3a1685719560e6da36b413c5a75e104005f1df85026933b59"} Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.628517 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" event={"ID":"e755aa46-ee10-40f0-9f13-bf7933f3d169","Type":"ContainerStarted","Data":"27e68f3c853df6ae58a4a4f4b6a7d76fe1239ae71b363a7c9a67f95992fe571e"} Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.628869 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 
07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.645176 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4d266" podStartSLOduration=3.639555604 podStartE2EDuration="8.645154441s" podCreationTimestamp="2025-11-29 07:22:15 +0000 UTC" firstStartedPulling="2025-11-29 07:22:17.560186379 +0000 UTC m=+428.113716278" lastFinishedPulling="2025-11-29 07:22:22.565785206 +0000 UTC m=+433.119315115" observedRunningTime="2025-11-29 07:22:23.639975379 +0000 UTC m=+434.193505278" watchObservedRunningTime="2025-11-29 07:22:23.645154441 +0000 UTC m=+434.198684350" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.673309 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-57l9d" Nov 29 07:22:23 crc kubenswrapper[4660]: I1129 07:22:23.684432 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" podStartSLOduration=2.68441508 podStartE2EDuration="2.68441508s" podCreationTimestamp="2025-11-29 07:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:23.682889176 +0000 UTC m=+434.236419075" watchObservedRunningTime="2025-11-29 07:22:23.68441508 +0000 UTC m=+434.237944979" Nov 29 07:22:24 crc kubenswrapper[4660]: I1129 07:22:24.766351 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:24 crc kubenswrapper[4660]: I1129 07:22:24.766700 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:24 crc kubenswrapper[4660]: I1129 07:22:24.806567 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:22:24 crc kubenswrapper[4660]: I1129 07:22:24.806809 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" podUID="38e424e7-1229-4a2e-9766-850aa93cec06" containerName="route-controller-manager" containerID="cri-o://33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7" gracePeriod=30 Nov 29 07:22:24 crc kubenswrapper[4660]: I1129 07:22:24.833487 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.316984 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.416972 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4j4g\" (UniqueName: \"kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g\") pod \"38e424e7-1229-4a2e-9766-850aa93cec06\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.417042 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert\") pod \"38e424e7-1229-4a2e-9766-850aa93cec06\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.417068 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca\") pod \"38e424e7-1229-4a2e-9766-850aa93cec06\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.417139 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config\") pod \"38e424e7-1229-4a2e-9766-850aa93cec06\" (UID: \"38e424e7-1229-4a2e-9766-850aa93cec06\") " Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.418037 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config" (OuterVolumeSpecName: "config") pod "38e424e7-1229-4a2e-9766-850aa93cec06" (UID: "38e424e7-1229-4a2e-9766-850aa93cec06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.418223 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca" (OuterVolumeSpecName: "client-ca") pod "38e424e7-1229-4a2e-9766-850aa93cec06" (UID: "38e424e7-1229-4a2e-9766-850aa93cec06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.422736 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g" (OuterVolumeSpecName: "kube-api-access-l4j4g") pod "38e424e7-1229-4a2e-9766-850aa93cec06" (UID: "38e424e7-1229-4a2e-9766-850aa93cec06"). InnerVolumeSpecName "kube-api-access-l4j4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.422825 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38e424e7-1229-4a2e-9766-850aa93cec06" (UID: "38e424e7-1229-4a2e-9766-850aa93cec06"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.518182 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.518215 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4j4g\" (UniqueName: \"kubernetes.io/projected/38e424e7-1229-4a2e-9766-850aa93cec06-kube-api-access-l4j4g\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.518229 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e424e7-1229-4a2e-9766-850aa93cec06-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.518241 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38e424e7-1229-4a2e-9766-850aa93cec06-client-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.639599 4660 generic.go:334] "Generic (PLEG): container finished" podID="38e424e7-1229-4a2e-9766-850aa93cec06" containerID="33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7" exitCode=0 Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.639913 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.640423 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" event={"ID":"38e424e7-1229-4a2e-9766-850aa93cec06","Type":"ContainerDied","Data":"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7"} Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.640455 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps" event={"ID":"38e424e7-1229-4a2e-9766-850aa93cec06","Type":"ContainerDied","Data":"3d868921e2e52dc5668e7948fd38f95cb4fa8fce2dbaebf58240b3399e32554c"} Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.640470 4660 scope.go:117] "RemoveContainer" containerID="33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.656165 4660 scope.go:117] "RemoveContainer" containerID="33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7" Nov 29 07:22:25 crc kubenswrapper[4660]: E1129 07:22:25.656578 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7\": container with ID starting with 33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7 not found: ID does not exist" containerID="33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.656634 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7"} err="failed to get container status \"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7\": rpc error: code = NotFound desc = could not find container 
\"33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7\": container with ID starting with 33594c817e0bb31930bbeea57a303fa10eca170cc00dd5a2b03ea377279a64c7 not found: ID does not exist" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.668302 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.673619 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866f46fcdc-p8bps"] Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.683582 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9mbq" Nov 29 07:22:25 crc kubenswrapper[4660]: I1129 07:22:25.708056 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e424e7-1229-4a2e-9766-850aa93cec06" path="/var/lib/kubelet/pods/38e424e7-1229-4a2e-9766-850aa93cec06/volumes" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.169828 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.170915 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.211538 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.793831 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z"] Nov 29 07:22:26 crc kubenswrapper[4660]: E1129 07:22:26.794774 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e424e7-1229-4a2e-9766-850aa93cec06" containerName="route-controller-manager" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.794878 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e424e7-1229-4a2e-9766-850aa93cec06" containerName="route-controller-manager" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.795100 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e424e7-1229-4a2e-9766-850aa93cec06" containerName="route-controller-manager" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.795612 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801018 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801244 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801432 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801482 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801579 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.801796 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.809181 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z"] Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.833196 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b89005-32d4-4ea7-96ce-65ab3499dd1e-serving-cert\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.833507 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s85h\" (UniqueName: \"kubernetes.io/projected/84b89005-32d4-4ea7-96ce-65ab3499dd1e-kube-api-access-7s85h\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.833709 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-client-ca\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.833781 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-config\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.935019 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b89005-32d4-4ea7-96ce-65ab3499dd1e-serving-cert\") pod 
\"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.935060 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s85h\" (UniqueName: \"kubernetes.io/projected/84b89005-32d4-4ea7-96ce-65ab3499dd1e-kube-api-access-7s85h\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.935099 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-client-ca\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.935133 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-config\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.936355 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-client-ca\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.936490 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b89005-32d4-4ea7-96ce-65ab3499dd1e-config\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.941302 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b89005-32d4-4ea7-96ce-65ab3499dd1e-serving-cert\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:26 crc kubenswrapper[4660]: I1129 07:22:26.949614 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s85h\" (UniqueName: \"kubernetes.io/projected/84b89005-32d4-4ea7-96ce-65ab3499dd1e-kube-api-access-7s85h\") pod \"route-controller-manager-54d4cc6664-5kg8z\" (UID: \"84b89005-32d4-4ea7-96ce-65ab3499dd1e\") " pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:27 crc kubenswrapper[4660]: I1129 07:22:27.112745 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:27 crc kubenswrapper[4660]: I1129 07:22:27.534319 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z"] Nov 29 07:22:27 crc kubenswrapper[4660]: W1129 07:22:27.540518 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84b89005_32d4_4ea7_96ce_65ab3499dd1e.slice/crio-7e30cbab991b37a0eec7e86583cced4892ece90f51e7286085e3ce9eb0df62ae WatchSource:0}: Error finding container 7e30cbab991b37a0eec7e86583cced4892ece90f51e7286085e3ce9eb0df62ae: Status 404 returned error can't find the container with id 7e30cbab991b37a0eec7e86583cced4892ece90f51e7286085e3ce9eb0df62ae Nov 29 07:22:27 crc kubenswrapper[4660]: I1129 07:22:27.652195 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" event={"ID":"84b89005-32d4-4ea7-96ce-65ab3499dd1e","Type":"ContainerStarted","Data":"7e30cbab991b37a0eec7e86583cced4892ece90f51e7286085e3ce9eb0df62ae"} Nov 29 07:22:27 crc kubenswrapper[4660]: I1129 07:22:27.699458 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4d266" Nov 29 07:22:28 crc kubenswrapper[4660]: I1129 07:22:28.657834 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" event={"ID":"84b89005-32d4-4ea7-96ce-65ab3499dd1e","Type":"ContainerStarted","Data":"c11c160f41d4cfb2136ed06f21799fb7d47385e29baf8028098ccac425b8f189"} Nov 29 07:22:28 crc kubenswrapper[4660]: I1129 07:22:28.679259 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" podStartSLOduration=4.6792385119999995 podStartE2EDuration="4.679238512s" podCreationTimestamp="2025-11-29 07:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:22:28.678694496 +0000 UTC m=+439.232224395" watchObservedRunningTime="2025-11-29 07:22:28.679238512 +0000 UTC m=+439.232768411" Nov 29 07:22:29 crc kubenswrapper[4660]: I1129 07:22:29.663305 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:29 crc kubenswrapper[4660]: I1129 07:22:29.668541 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54d4cc6664-5kg8z" Nov 29 07:22:35 crc kubenswrapper[4660]: I1129 07:22:35.500660 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:22:35 crc kubenswrapper[4660]: I1129 07:22:35.500927 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 
07:22:35 crc kubenswrapper[4660]: I1129 07:22:35.500972 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:22:35 crc kubenswrapper[4660]: I1129 07:22:35.501490 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:22:35 crc kubenswrapper[4660]: I1129 07:22:35.501552 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5" gracePeriod=600 Nov 29 07:22:37 crc kubenswrapper[4660]: I1129 07:22:37.715042 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5" exitCode=0 Nov 29 07:22:37 crc kubenswrapper[4660]: I1129 07:22:37.715098 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5"} Nov 29 07:22:37 crc kubenswrapper[4660]: I1129 07:22:37.715484 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5"} Nov 29 07:22:37 crc kubenswrapper[4660]: I1129 07:22:37.715506 4660 scope.go:117] "RemoveContainer" containerID="1c9f6db638eabe7e0afe5fbc95d1a11b59f438e399605045578ea256ee882d21" Nov 29 07:22:41 crc kubenswrapper[4660]: I1129 07:22:41.515509 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hwjdf" Nov 29 07:22:41 crc kubenswrapper[4660]: I1129 07:22:41.577603 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"] Nov 29 07:23:06 crc kubenswrapper[4660]: I1129 07:23:06.630221 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" podUID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" containerName="registry" containerID="cri-o://4100398878d4b4dbc55fa1a57eb652af8c008137faa0512a31c34b781ce187ec" gracePeriod=30 Nov 29 07:23:06 crc kubenswrapper[4660]: I1129 07:23:06.897682 4660 generic.go:334] "Generic (PLEG): container finished" podID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" containerID="4100398878d4b4dbc55fa1a57eb652af8c008137faa0512a31c34b781ce187ec" exitCode=0 Nov 29 07:23:06 crc kubenswrapper[4660]: I1129 07:23:06.897812 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" event={"ID":"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0","Type":"ContainerDied","Data":"4100398878d4b4dbc55fa1a57eb652af8c008137faa0512a31c34b781ce187ec"} Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.026718 4660 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.177895 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.177964 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178015 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178037 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvx5\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178070 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178085 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178111 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.178310 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\" (UID: \"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0\") " Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.179046 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.179115 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.184085 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5" (OuterVolumeSpecName: "kube-api-access-7vvx5") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "kube-api-access-7vvx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.184188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.186427 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.195124 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.195955 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.200303 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" (UID: "d038381e-2b8e-4b9d-8ca4-301d2ecefcd0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279675 4660 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279715 4660 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279731 4660 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279746 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvx5\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-kube-api-access-7vvx5\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279757 4660 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279769 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.279780 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.904007 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" event={"ID":"d038381e-2b8e-4b9d-8ca4-301d2ecefcd0","Type":"ContainerDied","Data":"8fb12781113fa44f111cd1c33d906719c775930e1e9b22aeb8b34c3997013226"} Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.904056 4660 scope.go:117] "RemoveContainer" containerID="4100398878d4b4dbc55fa1a57eb652af8c008137faa0512a31c34b781ce187ec" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.904140 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44llw" Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.927572 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"] Nov 29 07:23:07 crc kubenswrapper[4660]: I1129 07:23:07.931210 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44llw"] Nov 29 07:23:09 crc kubenswrapper[4660]: I1129 07:23:09.706458 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" path="/var/lib/kubelet/pods/d038381e-2b8e-4b9d-8ca4-301d2ecefcd0/volumes" Nov 29 07:25:05 crc kubenswrapper[4660]: I1129 07:25:05.500439 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:25:05 crc kubenswrapper[4660]: I1129 07:25:05.501153 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:25:35 crc kubenswrapper[4660]: I1129 07:25:35.500875 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:25:35 crc kubenswrapper[4660]: I1129 07:25:35.501457 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.500810 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.502855 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.503063 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.504055 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.504319 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5" gracePeriod=600 Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.912300 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5" exitCode=0 Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.912372 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5"} Nov 29 07:26:05 crc kubenswrapper[4660]: I1129 07:26:05.912654 4660 scope.go:117] "RemoveContainer" containerID="989dd0952d000cd4f49140f82c7a75fb3526482c195db59f4cc4c65df85512c5" Nov 29 07:26:06 crc kubenswrapper[4660]: I1129 07:26:06.919498 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583"} Nov 29 07:28:02 crc kubenswrapper[4660]: I1129 07:28:02.254705 4660 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 29 07:28:05 crc kubenswrapper[4660]: I1129 07:28:05.500830 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:28:05 crc kubenswrapper[4660]: I1129 07:28:05.501163 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.006907 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4hsk"] Nov 29 07:28:20 crc kubenswrapper[4660]: E1129 07:28:20.007592 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" containerName="registry" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.007603 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" containerName="registry" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.007723 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d038381e-2b8e-4b9d-8ca4-301d2ecefcd0" containerName="registry" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.008068 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.010216 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.013724 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qqswk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.014019 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-vsxjs"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.014170 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.014814 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-vsxjs" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.017936 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4hsk"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.022283 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hk77d" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.058773 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b4x7w"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.059442 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.066162 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-vsxjs"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.067826 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pxxkh" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.099708 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvkmj\" (UniqueName: \"kubernetes.io/projected/bb0d8d41-b2d2-432b-865f-0069bd153d0a-kube-api-access-dvkmj\") pod \"cert-manager-cainjector-7f985d654d-s4hsk\" (UID: \"bb0d8d41-b2d2-432b-865f-0069bd153d0a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.099790 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdgb\" (UniqueName: \"kubernetes.io/projected/c7d3889b-9b53-40ae-9a2e-39e7080e11c9-kube-api-access-pcdgb\") pod \"cert-manager-5b446d88c5-vsxjs\" (UID: \"c7d3889b-9b53-40ae-9a2e-39e7080e11c9\") " pod="cert-manager/cert-manager-5b446d88c5-vsxjs" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.099827 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghgj\" (UniqueName: \"kubernetes.io/projected/fd8f5350-5025-49b7-85c6-5f7c1d5724a7-kube-api-access-mghgj\") pod \"cert-manager-webhook-5655c58dd6-b4x7w\" (UID: \"fd8f5350-5025-49b7-85c6-5f7c1d5724a7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.100386 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b4x7w"] Nov 29 07:28:20 
crc kubenswrapper[4660]: I1129 07:28:20.200758 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghgj\" (UniqueName: \"kubernetes.io/projected/fd8f5350-5025-49b7-85c6-5f7c1d5724a7-kube-api-access-mghgj\") pod \"cert-manager-webhook-5655c58dd6-b4x7w\" (UID: \"fd8f5350-5025-49b7-85c6-5f7c1d5724a7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.200983 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvkmj\" (UniqueName: \"kubernetes.io/projected/bb0d8d41-b2d2-432b-865f-0069bd153d0a-kube-api-access-dvkmj\") pod \"cert-manager-cainjector-7f985d654d-s4hsk\" (UID: \"bb0d8d41-b2d2-432b-865f-0069bd153d0a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.201162 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdgb\" (UniqueName: \"kubernetes.io/projected/c7d3889b-9b53-40ae-9a2e-39e7080e11c9-kube-api-access-pcdgb\") pod \"cert-manager-5b446d88c5-vsxjs\" (UID: \"c7d3889b-9b53-40ae-9a2e-39e7080e11c9\") " pod="cert-manager/cert-manager-5b446d88c5-vsxjs" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.222533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghgj\" (UniqueName: \"kubernetes.io/projected/fd8f5350-5025-49b7-85c6-5f7c1d5724a7-kube-api-access-mghgj\") pod \"cert-manager-webhook-5655c58dd6-b4x7w\" (UID: \"fd8f5350-5025-49b7-85c6-5f7c1d5724a7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.222562 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdgb\" (UniqueName: \"kubernetes.io/projected/c7d3889b-9b53-40ae-9a2e-39e7080e11c9-kube-api-access-pcdgb\") pod \"cert-manager-5b446d88c5-vsxjs\" (UID: \"c7d3889b-9b53-40ae-9a2e-39e7080e11c9\") " pod="cert-manager/cert-manager-5b446d88c5-vsxjs" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.229861 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvkmj\" (UniqueName: \"kubernetes.io/projected/bb0d8d41-b2d2-432b-865f-0069bd153d0a-kube-api-access-dvkmj\") pod \"cert-manager-cainjector-7f985d654d-s4hsk\" (UID: \"bb0d8d41-b2d2-432b-865f-0069bd153d0a\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.326525 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.332796 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-vsxjs" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.374250 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.567458 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-vsxjs"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.583482 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.620673 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4hsk"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.663679 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b4x7w"] Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.681169 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-vsxjs" event={"ID":"c7d3889b-9b53-40ae-9a2e-39e7080e11c9","Type":"ContainerStarted","Data":"acf6875dd936ec6211f92628fc58b6bcc77e17b534e7a026870367899ae70aee"} Nov 29 07:28:20 crc kubenswrapper[4660]: I1129 07:28:20.682933 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" event={"ID":"bb0d8d41-b2d2-432b-865f-0069bd153d0a","Type":"ContainerStarted","Data":"87323d9c8bb5320073c1af72d2f6fb16cc897f67fac3d6e3e1bfc31a2bda71bf"} Nov 29 07:28:20 crc kubenswrapper[4660]: W1129 07:28:20.686092 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd8f5350_5025_49b7_85c6_5f7c1d5724a7.slice/crio-5c91f9427110c53d720e46d5e7d6a0402f9c81c269ceddafa3a6857fdc1821e3 WatchSource:0}: Error finding container 5c91f9427110c53d720e46d5e7d6a0402f9c81c269ceddafa3a6857fdc1821e3: Status 404 returned error can't find the container with id 5c91f9427110c53d720e46d5e7d6a0402f9c81c269ceddafa3a6857fdc1821e3 Nov 29 07:28:21 crc kubenswrapper[4660]: I1129 07:28:21.688925 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" event={"ID":"fd8f5350-5025-49b7-85c6-5f7c1d5724a7","Type":"ContainerStarted","Data":"5c91f9427110c53d720e46d5e7d6a0402f9c81c269ceddafa3a6857fdc1821e3"} Nov 29 07:28:23 crc kubenswrapper[4660]: I1129 07:28:23.700380 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-vsxjs" event={"ID":"c7d3889b-9b53-40ae-9a2e-39e7080e11c9","Type":"ContainerStarted","Data":"3901ede5a71da744fee4a3875871cffd33b080bf513b9806baa12887b03e7d5a"} Nov 29 07:28:23 crc kubenswrapper[4660]: I1129 07:28:23.705106 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" event={"ID":"bb0d8d41-b2d2-432b-865f-0069bd153d0a","Type":"ContainerStarted","Data":"44fe5557815d64041049fa062fd52413190a1a698e9709ad679c85255a3175fc"} Nov 29 07:28:23 crc kubenswrapper[4660]: I1129 07:28:23.719201 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-vsxjs" podStartSLOduration=1.958259271 podStartE2EDuration="4.719178163s" podCreationTimestamp="2025-11-29 07:28:19 +0000 UTC" firstStartedPulling="2025-11-29 07:28:20.583185308 +0000 UTC m=+791.136715207" lastFinishedPulling="2025-11-29 07:28:23.34410419 +0000 UTC m=+793.897634099" observedRunningTime="2025-11-29 07:28:23.711585135 +0000 UTC m=+794.265115054" watchObservedRunningTime="2025-11-29 07:28:23.719178163 +0000 UTC 
m=+794.272708072" Nov 29 07:28:23 crc kubenswrapper[4660]: I1129 07:28:23.727871 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4hsk" podStartSLOduration=2.07286743 podStartE2EDuration="4.72782103s" podCreationTimestamp="2025-11-29 07:28:19 +0000 UTC" firstStartedPulling="2025-11-29 07:28:20.62049841 +0000 UTC m=+791.174028309" lastFinishedPulling="2025-11-29 07:28:23.275452 +0000 UTC m=+793.828981909" observedRunningTime="2025-11-29 07:28:23.72711403 +0000 UTC m=+794.280643949" watchObservedRunningTime="2025-11-29 07:28:23.72782103 +0000 UTC m=+794.281350929" Nov 29 07:28:24 crc kubenswrapper[4660]: I1129 07:28:24.886845 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" event={"ID":"fd8f5350-5025-49b7-85c6-5f7c1d5724a7","Type":"ContainerStarted","Data":"dbde4ec06cffe91225b739f22a50c468401fad19701a9de1c84ec51e0f4d73b8"} Nov 29 07:28:24 crc kubenswrapper[4660]: I1129 07:28:24.887166 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:24 crc kubenswrapper[4660]: I1129 07:28:24.902566 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" podStartSLOduration=1.292354552 podStartE2EDuration="4.902546086s" podCreationTimestamp="2025-11-29 07:28:20 +0000 UTC" firstStartedPulling="2025-11-29 07:28:20.68840343 +0000 UTC m=+791.241933329" lastFinishedPulling="2025-11-29 07:28:24.298594974 +0000 UTC m=+794.852124863" observedRunningTime="2025-11-29 07:28:24.899364179 +0000 UTC m=+795.452894088" watchObservedRunningTime="2025-11-29 07:28:24.902546086 +0000 UTC m=+795.456076005" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.189457 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgvps"] Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190696 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-controller" containerID="cri-o://3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190815 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="nbdb" containerID="cri-o://84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190859 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190933 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-node" containerID="cri-o://2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190994 4660 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-acl-logging" containerID="cri-o://2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190842 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="northd" containerID="cri-o://a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.190994 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="sbdb" containerID="cri-o://cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.282105 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" containerID="cri-o://f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" gracePeriod=30 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.385827 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-b4x7w" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.588164 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/3.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.590358 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovn-acl-logging/0.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.590866 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovn-controller/0.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.591187 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640230 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tzbpk"] Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640448 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640462 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640472 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640479 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640495 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640502 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640511 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640518 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640526 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640532 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640544 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640550 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640559 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="northd" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640566 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="northd" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640579 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-acl-logging" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640587 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-acl-logging" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640599 4660 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kubecfg-setup" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640628 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kubecfg-setup" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640639 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="nbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640647 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="nbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640660 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-node" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640667 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-node" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640680 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="sbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640687 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="sbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640797 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="sbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640810 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640817 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640825 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640833 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="northd" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640843 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="nbdb" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640852 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640860 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-node" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640872 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="kube-rbac-proxy-ovn-metrics" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.640881 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovn-acl-logging" Nov 29 07:28:30 crc kubenswrapper[4660]: E1129 07:28:30.640996 4660 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.641007 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.641116 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.641127 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerName="ovnkube-controller" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.642968 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673215 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673257 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673290 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673326 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673335 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673356 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673381 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673416 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673376 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket" (OuterVolumeSpecName: "log-socket") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673423 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673468 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673437 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673400 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673422 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673536 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673567 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673588 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673649 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szm8g\" (UniqueName: \"kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673691 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673713 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673738 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673760 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673791 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673830 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673856 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673864 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673886 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd\") pod \"01aa307a-c2ec-4ded-8677-da549fbfba76\" (UID: \"01aa307a-c2ec-4ded-8677-da549fbfba76\") " Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674076 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-node-log\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674121 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-slash\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674145 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-var-lib-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674169 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9cm\" (UniqueName: \"kubernetes.io/projected/3285043d-f2a7-4b15-9f2f-eac99952cc07-kube-api-access-7l9cm\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674212 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-ovn\") pod 
\"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674283 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-netd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674312 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-config\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674341 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674389 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovn-node-metrics-cert\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674419 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-etc-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674439 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-script-lib\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674464 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-systemd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674495 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-log-socket\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674525 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674556 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-kubelet\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674587 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-netns\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674633 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-env-overrides\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674676 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-bin\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674716 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674744 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-systemd-units\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674841 4660 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674855 4660 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-log-socket\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674867 4660 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674881 4660 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674906 4660 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674918 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.674929 4660 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673890 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673907 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673926 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash" (OuterVolumeSpecName: "host-slash") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.673942 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log" (OuterVolumeSpecName: "node-log") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675501 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675506 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675448 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675726 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675754 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.675381 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.681357 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g" (OuterVolumeSpecName: "kube-api-access-szm8g") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "kube-api-access-szm8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.682205 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.688037 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "01aa307a-c2ec-4ded-8677-da549fbfba76" (UID: "01aa307a-c2ec-4ded-8677-da549fbfba76"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776188 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776462 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-kubelet\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776319 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776512 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-netns\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776490 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-netns\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776537 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-kubelet\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776565 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-env-overrides\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776659 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-bin\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776702 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776734 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-systemd-units\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776810 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-node-log\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776868 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-slash\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776901 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-var-lib-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776931 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9cm\" (UniqueName: \"kubernetes.io/projected/3285043d-f2a7-4b15-9f2f-eac99952cc07-kube-api-access-7l9cm\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.776964 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-ovn\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777005 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-netd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777034 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-config\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777063 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-node-log\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777066 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777101 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovn-node-metrics-cert\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777103 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-run-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777117 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-script-lib\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777132 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-etc-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777145 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-systemd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777147 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-slash\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777161 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-log-socket\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777187 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-var-lib-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777217 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-log-socket\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777519 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-netd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777593 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-ovn\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777194 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777668 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-env-overrides\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777675 4660 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777708 4660 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777721 4660 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777731 4660 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777743 4660 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777755 4660 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-host-slash\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777764 4660 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-node-log\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777774 4660 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szm8g\" (UniqueName: \"kubernetes.io/projected/01aa307a-c2ec-4ded-8677-da549fbfba76-kube-api-access-szm8g\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777784 4660 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777794 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01aa307a-c2ec-4ded-8677-da549fbfba76-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777806 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/01aa307a-c2ec-4ded-8677-da549fbfba76-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777817 4660 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/01aa307a-c2ec-4ded-8677-da549fbfba76-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.777979 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-cni-bin\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.778030 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.778074 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-systemd-units\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.778115 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-etc-openvswitch\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.778165 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-config\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.778231 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3285043d-f2a7-4b15-9f2f-eac99952cc07-run-systemd\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.779020 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovnkube-script-lib\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.781204 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3285043d-f2a7-4b15-9f2f-eac99952cc07-ovn-node-metrics-cert\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.800998 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9cm\" (UniqueName: \"kubernetes.io/projected/3285043d-f2a7-4b15-9f2f-eac99952cc07-kube-api-access-7l9cm\") pod \"ovnkube-node-tzbpk\" (UID: \"3285043d-f2a7-4b15-9f2f-eac99952cc07\") " pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.926754 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/2.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.927172 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/1.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.927219 4660 generic.go:334] "Generic (PLEG): container finished" podID="e71cb583-cccf-4345-8695-0d3a6c237a35" containerID="ef03925e6b8c552fb905d516efb63d1ac89f995971d89cd6413d64325fc6ff3f" exitCode=2 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.927281 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerDied","Data":"ef03925e6b8c552fb905d516efb63d1ac89f995971d89cd6413d64325fc6ff3f"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.927324 4660 scope.go:117] "RemoveContainer" containerID="f85042e0c44e8f32c3c38d09837040d9f7f54c59e7de18b30aca2f50d597e4d3" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.928111 4660 scope.go:117] "RemoveContainer" containerID="ef03925e6b8c552fb905d516efb63d1ac89f995971d89cd6413d64325fc6ff3f" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.930256 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovnkube-controller/3.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.943723 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovn-acl-logging/0.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.944570 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgvps_01aa307a-c2ec-4ded-8677-da549fbfba76/ovn-controller/0.log" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945284 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 
07:28:30.945321 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945332 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945344 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945354 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945363 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" exitCode=0 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945370 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" exitCode=143 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945378 4660 generic.go:334] "Generic (PLEG): container finished" podID="01aa307a-c2ec-4ded-8677-da549fbfba76" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" exitCode=143 Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945403 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945435 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945453 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945465 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945476 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945488 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" 
event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945501 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945513 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945519 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945525 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945532 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945538 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945545 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945554 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945560 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945562 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.945567 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946149 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946166 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946175 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946182 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946190 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946196 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946202 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946207 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946213 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946219 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946225 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946234 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946244 4660 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946251 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946257 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946263 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946269 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946275 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946283 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946291 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946297 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946303 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946311 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgvps" event={"ID":"01aa307a-c2ec-4ded-8677-da549fbfba76","Type":"ContainerDied","Data":"0097a6aa4cab3a22e09a1bb5a3dcc2228565b3b4e7aa8ddf403f0cfd96815434"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946323 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946332 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946339 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946349 4660 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946357 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946364 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946371 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946377 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946383 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.946390 4660 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.953823 4660 scope.go:117] "RemoveContainer" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.954193 4660 util.go:30] "No sandbox for pod can be found. 
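Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk"

[Editor's note: the RemoveContainer run that follows, and the NotFound errors at 07:28:31, show the kubelet asking the CRI runtime (cri-o here) to delete containers that garbage collection has already removed. Removal is treated as idempotent: a NotFound status from the runtime is logged and taken to mean "already gone" rather than surfaced as a pod-level failure. A minimal sketch of that handling, under an assumed stand-in interface (runtimeService is illustrative, not the real CRI client; container IDs abbreviated):]

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: could not find container")

// runtimeService stands in for the CRI runtime client the kubelet talks to.
type runtimeService struct{ containers map[string]bool }

// removeContainer fails with errNotFound when the runtime no longer knows the
// ID, matching the "rpc error: code = NotFound" responses in the log.
func (r *runtimeService) removeContainer(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	delete(r.containers, id)
	return nil
}

// removeIdempotent treats NotFound as success: the container is already gone.
func removeIdempotent(r *runtimeService, id string) {
	err := r.removeContainer(id)
	switch {
	case err == nil:
		fmt.Printf("removed container %s\n", id)
	case errors.Is(err, errNotFound):
		fmt.Printf("DeleteContainer returned error for %s: %v (already removed)\n", id, err)
	default:
		fmt.Printf("removal of %s failed: %v\n", id, err)
	}
}

func main() {
	rt := &runtimeService{containers: map[string]bool{"93b9932b": true}}
	removeIdempotent(rt, "93b9932b") // still present: actually removed
	removeIdempotent(rt, "f73a3de7") // already gone: NotFound tolerated
}

[This is why each E-level "ContainerStatus from runtime service failed" record below is followed by an I-level "DeleteContainer returned error" bookkeeping record rather than a retry.]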
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:30 crc kubenswrapper[4660]: I1129 07:28:30.974501 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.000775 4660 scope.go:117] "RemoveContainer" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.006368 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgvps"] Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.016058 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgvps"] Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.024602 4660 scope.go:117] "RemoveContainer" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.041332 4660 scope.go:117] "RemoveContainer" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.059588 4660 scope.go:117] "RemoveContainer" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.089880 4660 scope.go:117] "RemoveContainer" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.103911 4660 scope.go:117] "RemoveContainer" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.123653 4660 scope.go:117] "RemoveContainer" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.166092 4660 scope.go:117] "RemoveContainer" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.177854 4660 scope.go:117] "RemoveContainer" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.178248 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": container with ID starting with f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c not found: ID does not exist" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178281 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} err="failed to get container status \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": rpc error: code = NotFound desc = could not find container \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": container with ID starting with f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178304 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.178570 4660 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": container with ID starting with fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35 not found: ID does not exist" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178594 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} err="failed to get container status \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": rpc error: code = NotFound desc = could not find container \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": container with ID starting with fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178624 4660 scope.go:117] "RemoveContainer" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.178862 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": container with ID starting with cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872 not found: ID does not exist" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178886 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} err="failed to get container status \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": rpc error: code = NotFound desc = could not find container \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": container with ID starting with cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.178898 4660 scope.go:117] "RemoveContainer" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.179243 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": container with ID starting with 84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad not found: ID does not exist" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179270 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} err="failed to get container status \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": rpc error: code = NotFound desc = could not find container \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": container with ID starting with 84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179291 4660 scope.go:117] "RemoveContainer" 
containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.179529 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": container with ID starting with a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b not found: ID does not exist" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179555 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} err="failed to get container status \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": rpc error: code = NotFound desc = could not find container \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": container with ID starting with a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179572 4660 scope.go:117] "RemoveContainer" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.179931 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": container with ID starting with 178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49 not found: ID does not exist" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179956 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} err="failed to get container status \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": rpc error: code = NotFound desc = could not find container \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": container with ID starting with 178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.179973 4660 scope.go:117] "RemoveContainer" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.180186 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": container with ID starting with 2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433 not found: ID does not exist" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180211 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} err="failed to get container status \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": rpc error: code = NotFound desc = could not find container \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": container with ID starting with 
2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180232 4660 scope.go:117] "RemoveContainer" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.180509 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": container with ID starting with 2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0 not found: ID does not exist" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180535 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} err="failed to get container status \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": rpc error: code = NotFound desc = could not find container \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": container with ID starting with 2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180553 4660 scope.go:117] "RemoveContainer" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.180839 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": container with ID starting with 3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8 not found: ID does not exist" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180864 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} err="failed to get container status \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": rpc error: code = NotFound desc = could not find container \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": container with ID starting with 3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.180880 4660 scope.go:117] "RemoveContainer" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: E1129 07:28:31.181100 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": container with ID starting with 93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31 not found: ID does not exist" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181124 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} err="failed to get container status \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": rpc 
error: code = NotFound desc = could not find container \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": container with ID starting with 93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181136 4660 scope.go:117] "RemoveContainer" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181456 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} err="failed to get container status \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": rpc error: code = NotFound desc = could not find container \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": container with ID starting with f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181475 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181685 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} err="failed to get container status \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": rpc error: code = NotFound desc = could not find container \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": container with ID starting with fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181710 4660 scope.go:117] "RemoveContainer" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181928 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} err="failed to get container status \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": rpc error: code = NotFound desc = could not find container \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": container with ID starting with cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.181951 4660 scope.go:117] "RemoveContainer" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.182302 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} err="failed to get container status \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": rpc error: code = NotFound desc = could not find container \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": container with ID starting with 84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.182327 4660 scope.go:117] "RemoveContainer" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc 
kubenswrapper[4660]: I1129 07:28:31.183033 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} err="failed to get container status \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": rpc error: code = NotFound desc = could not find container \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": container with ID starting with a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183054 4660 scope.go:117] "RemoveContainer" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183274 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} err="failed to get container status \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": rpc error: code = NotFound desc = could not find container \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": container with ID starting with 178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183296 4660 scope.go:117] "RemoveContainer" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183581 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} err="failed to get container status \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": rpc error: code = NotFound desc = could not find container \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": container with ID starting with 2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183600 4660 scope.go:117] "RemoveContainer" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183849 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} err="failed to get container status \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": rpc error: code = NotFound desc = could not find container \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": container with ID starting with 2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.183872 4660 scope.go:117] "RemoveContainer" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184185 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} err="failed to get container status \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": rpc error: code = NotFound desc = could not find container \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": container with ID 
starting with 3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184210 4660 scope.go:117] "RemoveContainer" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184423 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} err="failed to get container status \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": rpc error: code = NotFound desc = could not find container \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": container with ID starting with 93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184448 4660 scope.go:117] "RemoveContainer" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184768 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} err="failed to get container status \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": rpc error: code = NotFound desc = could not find container \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": container with ID starting with f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184791 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.184995 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} err="failed to get container status \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": rpc error: code = NotFound desc = could not find container \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": container with ID starting with fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185017 4660 scope.go:117] "RemoveContainer" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185214 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} err="failed to get container status \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": rpc error: code = NotFound desc = could not find container \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": container with ID starting with cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185232 4660 scope.go:117] "RemoveContainer" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185412 4660 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} err="failed to get container status \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": rpc error: code = NotFound desc = could not find container \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": container with ID starting with 84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185424 4660 scope.go:117] "RemoveContainer" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185582 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} err="failed to get container status \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": rpc error: code = NotFound desc = could not find container \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": container with ID starting with a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185600 4660 scope.go:117] "RemoveContainer" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185843 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} err="failed to get container status \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": rpc error: code = NotFound desc = could not find container \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": container with ID starting with 178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.185861 4660 scope.go:117] "RemoveContainer" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186074 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} err="failed to get container status \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": rpc error: code = NotFound desc = could not find container \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": container with ID starting with 2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186095 4660 scope.go:117] "RemoveContainer" containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186489 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} err="failed to get container status \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": rpc error: code = NotFound desc = could not find container \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": container with ID starting with 2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0 not found: ID does not exist" Nov 
29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186521 4660 scope.go:117] "RemoveContainer" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186777 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} err="failed to get container status \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": rpc error: code = NotFound desc = could not find container \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": container with ID starting with 3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.186803 4660 scope.go:117] "RemoveContainer" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187037 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} err="failed to get container status \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": rpc error: code = NotFound desc = could not find container \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": container with ID starting with 93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187060 4660 scope.go:117] "RemoveContainer" containerID="f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187356 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c"} err="failed to get container status \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": rpc error: code = NotFound desc = could not find container \"f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c\": container with ID starting with f73a3de7ac9c1e68c2b513ccf65461d346916fcd8806c72f8422a40131804b8c not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187374 4660 scope.go:117] "RemoveContainer" containerID="fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187579 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35"} err="failed to get container status \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": rpc error: code = NotFound desc = could not find container \"fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35\": container with ID starting with fd74f892d18a997e028249bcde6c983e3d73cd635daef5c285c9155b18037b35 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187596 4660 scope.go:117] "RemoveContainer" containerID="cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187912 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872"} err="failed to get container status 
\"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": rpc error: code = NotFound desc = could not find container \"cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872\": container with ID starting with cf3498dca00e18a53abed822e639c7c2bf989f70b0235b5aacffe2011ef23872 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.187928 4660 scope.go:117] "RemoveContainer" containerID="84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188161 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad"} err="failed to get container status \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": rpc error: code = NotFound desc = could not find container \"84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad\": container with ID starting with 84bbae63da343610a518ee043a113da67d022864972d00ff52d9c840f031a2ad not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188186 4660 scope.go:117] "RemoveContainer" containerID="a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188402 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b"} err="failed to get container status \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": rpc error: code = NotFound desc = could not find container \"a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b\": container with ID starting with a998f899863ca0366b1d017ad57d15dbf8da4fc4eacef4182019df9c209c6b4b not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188427 4660 scope.go:117] "RemoveContainer" containerID="178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188758 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49"} err="failed to get container status \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": rpc error: code = NotFound desc = could not find container \"178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49\": container with ID starting with 178d3a8618d43588297ac0103fd7ce95b75dea0f1e267c517de8abc52de6eb49 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188782 4660 scope.go:117] "RemoveContainer" containerID="2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.188981 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433"} err="failed to get container status \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": rpc error: code = NotFound desc = could not find container \"2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433\": container with ID starting with 2372f56762fa7a535104b1bbf2bdce20570b0d4a52d4a5f939c5b1cf225ea433 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189006 4660 scope.go:117] "RemoveContainer" 
containerID="2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189268 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0"} err="failed to get container status \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": rpc error: code = NotFound desc = could not find container \"2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0\": container with ID starting with 2cea4546b228e3be39873953ba10b7f07e2b2cec746461de917d25b038fc8eb0 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189287 4660 scope.go:117] "RemoveContainer" containerID="3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189519 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8"} err="failed to get container status \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": rpc error: code = NotFound desc = could not find container \"3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8\": container with ID starting with 3a5ebceb9e9b42769348273108659f2d21e3a41647043e22a38a74312a1604c8 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189536 4660 scope.go:117] "RemoveContainer" containerID="93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.189800 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31"} err="failed to get container status \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": rpc error: code = NotFound desc = could not find container \"93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31\": container with ID starting with 93b9932b04ef6a7e322af038fd03d4f7343f7099b802abfdf4c4912419001e31 not found: ID does not exist" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.702643 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01aa307a-c2ec-4ded-8677-da549fbfba76" path="/var/lib/kubelet/pods/01aa307a-c2ec-4ded-8677-da549fbfba76/volumes" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.951575 4660 generic.go:334] "Generic (PLEG): container finished" podID="3285043d-f2a7-4b15-9f2f-eac99952cc07" containerID="661c794ca9c084f49b4f362ab09e9ce5eb8119106f9fedc343edd3b07bba6f5f" exitCode=0 Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.951644 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerDied","Data":"661c794ca9c084f49b4f362ab09e9ce5eb8119106f9fedc343edd3b07bba6f5f"} Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.951681 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"377e737144a52878fb520aa8ab873bcf6d5c92d960a6e165b200a208ddd032d9"} Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.954505 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-99mtq_e71cb583-cccf-4345-8695-0d3a6c237a35/kube-multus/2.log" Nov 29 07:28:31 crc kubenswrapper[4660]: I1129 07:28:31.954561 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99mtq" event={"ID":"e71cb583-cccf-4345-8695-0d3a6c237a35","Type":"ContainerStarted","Data":"9d72680721c31a823849ccc24690410f619a04ed1fefcebca6d273ccebe911a6"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"9028549bbddbf460e0642a5fb33163ddb5d73571d4fb895fb3d717f415a541cf"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962544 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"a0bb377fe2a8cf60f2605fa222893935c1ad0eb4e8def9d1807b29d19b800cda"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962560 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"0845d87db73d6a982845a1f8c92d57d3501fb6c0705dc211a5dc684208de4d14"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962570 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"47fc3ce0f1164b10ef816102d70ec54325229538075a76aaf61aa9a9f17146e0"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962582 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"3eb225243e7f05d871475c1218fc2a43c4ab7f4bfab4abf184142cb38691040b"} Nov 29 07:28:32 crc kubenswrapper[4660]: I1129 07:28:32.962592 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"6fd8259fb2481c7057697515ef2b9e2dfe7a36d02314d09165649c6c83a888a0"} Nov 29 07:28:34 crc kubenswrapper[4660]: I1129 07:28:34.975556 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"dd83f7c664ca6bc8fe859706d6dd832f2cba7ac4620e0779e35cf7b87bbb3dc3"} Nov 29 07:28:35 crc kubenswrapper[4660]: I1129 07:28:35.500917 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:28:35 crc kubenswrapper[4660]: I1129 07:28:35.501000 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:28:37 crc kubenswrapper[4660]: I1129 07:28:37.997737 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" event={"ID":"3285043d-f2a7-4b15-9f2f-eac99952cc07","Type":"ContainerStarted","Data":"a549029e226c43c62821f103e2855aa84f2351f84c8d57aa12fcfb31ebead1c7"} Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.008699 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.008747 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.008757 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.036313 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.043565 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" podStartSLOduration=10.043548709 podStartE2EDuration="10.043548709s" podCreationTimestamp="2025-11-29 07:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:28:40.03995384 +0000 UTC m=+810.593483759" watchObservedRunningTime="2025-11-29 07:28:40.043548709 +0000 UTC m=+810.597078618" Nov 29 07:28:40 crc kubenswrapper[4660]: I1129 07:28:40.049286 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:28:42 crc kubenswrapper[4660]: I1129 07:28:42.037626 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tzbpk" Nov 29 07:29:05 crc kubenswrapper[4660]: I1129 07:29:05.503165 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:29:05 crc kubenswrapper[4660]: I1129 07:29:05.503756 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:29:05 crc kubenswrapper[4660]: I1129 07:29:05.503801 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:29:05 crc kubenswrapper[4660]: I1129 07:29:05.504331 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:29:05 crc kubenswrapper[4660]: I1129 07:29:05.504388 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" 
containerName="machine-config-daemon" containerID="cri-o://ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583" gracePeriod=600 Nov 29 07:29:06 crc kubenswrapper[4660]: I1129 07:29:06.150100 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583" exitCode=0 Nov 29 07:29:06 crc kubenswrapper[4660]: I1129 07:29:06.150152 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583"} Nov 29 07:29:06 crc kubenswrapper[4660]: I1129 07:29:06.150532 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1"} Nov 29 07:29:06 crc kubenswrapper[4660]: I1129 07:29:06.150554 4660 scope.go:117] "RemoveContainer" containerID="21fcfde41d8f0eb843a0ec8e8ff710f45a74b90e8f4c10514aff34edbeca29d5" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.297089 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km"] Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.298718 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.300327 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.320177 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km"] Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.405407 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtprm\" (UniqueName: \"kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.405462 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.405529 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc 
kubenswrapper[4660]: I1129 07:29:14.507124 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.507216 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtprm\" (UniqueName: \"kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.507256 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.507793 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.507852 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.534154 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtprm\" (UniqueName: \"kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:14 crc kubenswrapper[4660]: I1129 07:29:14.611543 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:15 crc kubenswrapper[4660]: I1129 07:29:15.013157 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km"] Nov 29 07:29:15 crc kubenswrapper[4660]: W1129 07:29:15.024430 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce7c0bf6_a2b1_40a0_b4bb_997251bda272.slice/crio-15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f WatchSource:0}: Error finding container 15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f: Status 404 returned error can't find the container with id 15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f Nov 29 07:29:15 crc kubenswrapper[4660]: I1129 07:29:15.211521 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerStarted","Data":"f676bf88bf45a2434d6a4fbc3ab507c9270df68afc9c04f3404a7a60290816e5"} Nov 29 07:29:15 crc kubenswrapper[4660]: I1129 07:29:15.211808 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerStarted","Data":"15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f"} Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.219263 4660 generic.go:334] "Generic (PLEG): container finished" podID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerID="f676bf88bf45a2434d6a4fbc3ab507c9270df68afc9c04f3404a7a60290816e5" exitCode=0 Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.219571 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerDied","Data":"f676bf88bf45a2434d6a4fbc3ab507c9270df68afc9c04f3404a7a60290816e5"} Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.565701 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.567379 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.589852 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.735924 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.736109 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkg7\" (UniqueName: \"kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.736230 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.836994 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.837108 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkg7\" (UniqueName: \"kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.837178 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.838091 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.838271 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.857128 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wxkg7\" (UniqueName: \"kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7\") pod \"redhat-operators-bh6tl\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:16 crc kubenswrapper[4660]: I1129 07:29:16.897370 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:17 crc kubenswrapper[4660]: I1129 07:29:17.106735 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:17 crc kubenswrapper[4660]: W1129 07:29:17.119085 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8f9ae1_75b8_442d_a4a7_b39d373f54a7.slice/crio-e7ba151095317ceebbf2ff4d6d1ea437d83d2482f23f2f199ff7847b108343d7 WatchSource:0}: Error finding container e7ba151095317ceebbf2ff4d6d1ea437d83d2482f23f2f199ff7847b108343d7: Status 404 returned error can't find the container with id e7ba151095317ceebbf2ff4d6d1ea437d83d2482f23f2f199ff7847b108343d7 Nov 29 07:29:17 crc kubenswrapper[4660]: I1129 07:29:17.224973 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerStarted","Data":"e7ba151095317ceebbf2ff4d6d1ea437d83d2482f23f2f199ff7847b108343d7"} Nov 29 07:29:18 crc kubenswrapper[4660]: I1129 07:29:18.232431 4660 generic.go:334] "Generic (PLEG): container finished" podID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerID="6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f" exitCode=0 Nov 29 07:29:18 crc kubenswrapper[4660]: I1129 07:29:18.232486 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerDied","Data":"6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f"} Nov 29 07:29:18 crc kubenswrapper[4660]: I1129 07:29:18.235999 4660 generic.go:334] "Generic (PLEG): container finished" podID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerID="e430852a581d4da9e7c4fedee681ad3765fd17a5aade3b22bd688267ceeb6a4e" exitCode=0 Nov 29 07:29:18 crc kubenswrapper[4660]: I1129 07:29:18.236047 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerDied","Data":"e430852a581d4da9e7c4fedee681ad3765fd17a5aade3b22bd688267ceeb6a4e"} Nov 29 07:29:19 crc kubenswrapper[4660]: I1129 07:29:19.245480 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerStarted","Data":"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef"} Nov 29 07:29:19 crc kubenswrapper[4660]: I1129 07:29:19.249400 4660 generic.go:334] "Generic (PLEG): container finished" podID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerID="3e2870d3843e61522e30287e0b8eb32c4a4b7998dbe5a8c85b23c9e650eda907" exitCode=0 Nov 29 07:29:19 crc kubenswrapper[4660]: I1129 07:29:19.249461 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" 
event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerDied","Data":"3e2870d3843e61522e30287e0b8eb32c4a4b7998dbe5a8c85b23c9e650eda907"} Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.258006 4660 generic.go:334] "Generic (PLEG): container finished" podID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerID="76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef" exitCode=0 Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.258107 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerDied","Data":"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef"} Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.502233 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.685300 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtprm\" (UniqueName: \"kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm\") pod \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.685346 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util\") pod \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.685499 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle\") pod \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\" (UID: \"ce7c0bf6-a2b1-40a0-b4bb-997251bda272\") " Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.686074 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle" (OuterVolumeSpecName: "bundle") pod "ce7c0bf6-a2b1-40a0-b4bb-997251bda272" (UID: "ce7c0bf6-a2b1-40a0-b4bb-997251bda272"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.691770 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm" (OuterVolumeSpecName: "kube-api-access-xtprm") pod "ce7c0bf6-a2b1-40a0-b4bb-997251bda272" (UID: "ce7c0bf6-a2b1-40a0-b4bb-997251bda272"). InnerVolumeSpecName "kube-api-access-xtprm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.695787 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util" (OuterVolumeSpecName: "util") pod "ce7c0bf6-a2b1-40a0-b4bb-997251bda272" (UID: "ce7c0bf6-a2b1-40a0-b4bb-997251bda272"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.786562 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.786592 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:20 crc kubenswrapper[4660]: I1129 07:29:20.786603 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtprm\" (UniqueName: \"kubernetes.io/projected/ce7c0bf6-a2b1-40a0-b4bb-997251bda272-kube-api-access-xtprm\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:21 crc kubenswrapper[4660]: I1129 07:29:21.268800 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" Nov 29 07:29:21 crc kubenswrapper[4660]: I1129 07:29:21.268763 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km" event={"ID":"ce7c0bf6-a2b1-40a0-b4bb-997251bda272","Type":"ContainerDied","Data":"15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f"} Nov 29 07:29:21 crc kubenswrapper[4660]: I1129 07:29:21.268956 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15fc95ae3587f5d24820994f89a23a2745e73b48734d83b69cbb02648975574f" Nov 29 07:29:22 crc kubenswrapper[4660]: I1129 07:29:22.277272 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerStarted","Data":"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d"} Nov 29 07:29:22 crc kubenswrapper[4660]: I1129 07:29:22.300696 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bh6tl" podStartSLOduration=2.58905302 podStartE2EDuration="6.30067733s" podCreationTimestamp="2025-11-29 07:29:16 +0000 UTC" firstStartedPulling="2025-11-29 07:29:18.235931151 +0000 UTC m=+848.789461050" lastFinishedPulling="2025-11-29 07:29:21.947555421 +0000 UTC m=+852.501085360" observedRunningTime="2025-11-29 07:29:22.29996275 +0000 UTC m=+852.853492649" watchObservedRunningTime="2025-11-29 07:29:22.30067733 +0000 UTC m=+852.854207239" Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.143259 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"] Nov 29 07:29:26 crc kubenswrapper[4660]: E1129 07:29:26.144498 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="extract" Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.144560 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="extract" Nov 29 07:29:26 crc kubenswrapper[4660]: E1129 07:29:26.144684 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="pull" Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.144734 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="pull" Nov 29 07:29:26 crc 
Nov 29 07:29:26 crc kubenswrapper[4660]: E1129 07:29:26.144862 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="util"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.144911 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="util"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.145064 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7c0bf6-a2b1-40a0-b4bb-997251bda272" containerName="extract"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.145468 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.147196 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.147791 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9xmq8"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.147811 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.169521 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"]
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.253193 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg4cc\" (UniqueName: \"kubernetes.io/projected/8355ccfb-5f01-461d-9aca-89e61881e1d2-kube-api-access-rg4cc\") pod \"nmstate-operator-5b5b58f5c8-bcpsd\" (UID: \"8355ccfb-5f01-461d-9aca-89e61881e1d2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.354281 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg4cc\" (UniqueName: \"kubernetes.io/projected/8355ccfb-5f01-461d-9aca-89e61881e1d2-kube-api-access-rg4cc\") pod \"nmstate-operator-5b5b58f5c8-bcpsd\" (UID: \"8355ccfb-5f01-461d-9aca-89e61881e1d2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.378872 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg4cc\" (UniqueName: \"kubernetes.io/projected/8355ccfb-5f01-461d-9aca-89e61881e1d2-kube-api-access-rg4cc\") pod \"nmstate-operator-5b5b58f5c8-bcpsd\" (UID: \"8355ccfb-5f01-461d-9aca-89e61881e1d2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"
Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.460376 4660 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd" Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.664446 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd"] Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.897859 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:26 crc kubenswrapper[4660]: I1129 07:29:26.898175 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:27 crc kubenswrapper[4660]: I1129 07:29:27.301814 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd" event={"ID":"8355ccfb-5f01-461d-9aca-89e61881e1d2","Type":"ContainerStarted","Data":"37eaa12a54262ec7cf47450a3e16c95d7ef0a288308d9f5d65a4abbf17e50991"} Nov 29 07:29:27 crc kubenswrapper[4660]: I1129 07:29:27.934466 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bh6tl" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="registry-server" probeResult="failure" output=< Nov 29 07:29:27 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:29:27 crc kubenswrapper[4660]: > Nov 29 07:29:33 crc kubenswrapper[4660]: I1129 07:29:33.335249 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd" event={"ID":"8355ccfb-5f01-461d-9aca-89e61881e1d2","Type":"ContainerStarted","Data":"e2b01a757e2582ec30bc48ba4cddaefbd3c0a3cf4efebe8aca7e8737a8d276f6"} Nov 29 07:29:33 crc kubenswrapper[4660]: I1129 07:29:33.357160 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-bcpsd" podStartSLOduration=1.262906289 podStartE2EDuration="7.357135888s" podCreationTimestamp="2025-11-29 07:29:26 +0000 UTC" firstStartedPulling="2025-11-29 07:29:26.673243657 +0000 UTC m=+857.226773556" lastFinishedPulling="2025-11-29 07:29:32.767473256 +0000 UTC m=+863.321003155" observedRunningTime="2025-11-29 07:29:33.354007522 +0000 UTC m=+863.907537421" watchObservedRunningTime="2025-11-29 07:29:33.357135888 +0000 UTC m=+863.910665787" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.826711 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb"] Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.827999 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.830903 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x8tmd" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.848898 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np"] Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.849516 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.856581 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.890246 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np"] Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.901654 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hzjdq"] Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.902276 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.936955 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb"] Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.962095 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hd4\" (UniqueName: \"kubernetes.io/projected/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-kube-api-access-22hd4\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.962163 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dhs\" (UniqueName: \"kubernetes.io/projected/abb40e0e-8d39-4ede-a762-2968c5ae46a1-kube-api-access-d8dhs\") pod \"nmstate-metrics-7f946cbc9-gxczb\" (UID: \"abb40e0e-8d39-4ede-a762-2968c5ae46a1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" Nov 29 07:29:34 crc kubenswrapper[4660]: I1129 07:29:34.962198 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.016925 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"] Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.017656 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.022573 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nbmvc" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.022879 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.026516 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"] Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.029691 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.063827 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-ovs-socket\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.063890 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22hd4\" (UniqueName: \"kubernetes.io/projected/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-kube-api-access-22hd4\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.063918 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bd6\" (UniqueName: \"kubernetes.io/projected/00b98def-8412-4510-a607-30ea7c13600d-kube-api-access-z8bd6\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.063953 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dhs\" (UniqueName: \"kubernetes.io/projected/abb40e0e-8d39-4ede-a762-2968c5ae46a1-kube-api-access-d8dhs\") pod \"nmstate-metrics-7f946cbc9-gxczb\" (UID: \"abb40e0e-8d39-4ede-a762-2968c5ae46a1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.063984 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.064008 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-dbus-socket\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.064073 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-nmstate-lock\") pod \"nmstate-handler-hzjdq\" (UID: 
\"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: E1129 07:29:35.064324 4660 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 29 07:29:35 crc kubenswrapper[4660]: E1129 07:29:35.064383 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair podName:c3aaf1b2-a146-43cd-91ab-8ee65cff6e44 nodeName:}" failed. No retries permitted until 2025-11-29 07:29:35.564367971 +0000 UTC m=+866.117897870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-ds7np" (UID: "c3aaf1b2-a146-43cd-91ab-8ee65cff6e44") : secret "openshift-nmstate-webhook" not found Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.083353 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hd4\" (UniqueName: \"kubernetes.io/projected/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-kube-api-access-22hd4\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.083407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dhs\" (UniqueName: \"kubernetes.io/projected/abb40e0e-8d39-4ede-a762-2968c5ae46a1-kube-api-access-d8dhs\") pod \"nmstate-metrics-7f946cbc9-gxczb\" (UID: \"abb40e0e-8d39-4ede-a762-2968c5ae46a1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.144989 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165405 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-ovs-socket\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165676 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bd6\" (UniqueName: \"kubernetes.io/projected/00b98def-8412-4510-a607-30ea7c13600d-kube-api-access-z8bd6\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165698 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1f69a645-8449-4c71-abdb-2d9a1413eae0-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165730 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwxg\" (UniqueName: \"kubernetes.io/projected/1f69a645-8449-4c71-abdb-2d9a1413eae0-kube-api-access-lrwxg\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165759 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-dbus-socket\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165781 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-nmstate-lock\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165814 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.165889 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-ovs-socket\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.166291 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-dbus-socket\") pod 
\"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.166315 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/00b98def-8412-4510-a607-30ea7c13600d-nmstate-lock\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.188013 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bd6\" (UniqueName: \"kubernetes.io/projected/00b98def-8412-4510-a607-30ea7c13600d-kube-api-access-z8bd6\") pod \"nmstate-handler-hzjdq\" (UID: \"00b98def-8412-4510-a607-30ea7c13600d\") " pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.218720 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:35 crc kubenswrapper[4660]: W1129 07:29:35.250058 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b98def_8412_4510_a607_30ea7c13600d.slice/crio-c6f84a222f98e6f3d3db30306418b4fae160b9cd9d1913d1a253a2deb34d1624 WatchSource:0}: Error finding container c6f84a222f98e6f3d3db30306418b4fae160b9cd9d1913d1a253a2deb34d1624: Status 404 returned error can't find the container with id c6f84a222f98e6f3d3db30306418b4fae160b9cd9d1913d1a253a2deb34d1624 Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.267296 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.267396 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1f69a645-8449-4c71-abdb-2d9a1413eae0-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: E1129 07:29:35.267425 4660 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 29 07:29:35 crc kubenswrapper[4660]: E1129 07:29:35.267485 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert podName:1f69a645-8449-4c71-abdb-2d9a1413eae0 nodeName:}" failed. No retries permitted until 2025-11-29 07:29:35.767468092 +0000 UTC m=+866.320997991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-54kd5" (UID: "1f69a645-8449-4c71-abdb-2d9a1413eae0") : secret "plugin-serving-cert" not found
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.267438 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwxg\" (UniqueName: \"kubernetes.io/projected/1f69a645-8449-4c71-abdb-2d9a1413eae0-kube-api-access-lrwxg\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.268533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1f69a645-8449-4c71-abdb-2d9a1413eae0-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.302296 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-676cb6754c-f2xsh"]
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.303015 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.306509 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwxg\" (UniqueName: \"kubernetes.io/projected/1f69a645-8449-4c71-abdb-2d9a1413eae0-kube-api-access-lrwxg\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.334454 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-676cb6754c-f2xsh"]
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.368786 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hzjdq" event={"ID":"00b98def-8412-4510-a607-30ea7c13600d","Type":"ContainerStarted","Data":"c6f84a222f98e6f3d3db30306418b4fae160b9cd9d1913d1a253a2deb34d1624"}
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.417837 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb"]
Nov 29 07:29:35 crc kubenswrapper[4660]: W1129 07:29:35.424200 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb40e0e_8d39_4ede_a762_2968c5ae46a1.slice/crio-ba59a0d19c37a3cf045d44fce4b13171e8a71e25824993c593a5dd00b56520aa WatchSource:0}: Error finding container ba59a0d19c37a3cf045d44fce4b13171e8a71e25824993c593a5dd00b56520aa: Status 404 returned error can't find the container with id ba59a0d19c37a3cf045d44fce4b13171e8a71e25824993c593a5dd00b56520aa
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470532 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6gsl\" (UniqueName: \"kubernetes.io/projected/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-kube-api-access-x6gsl\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
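Both tls-key-pair and plugin-serving-cert fail their first MountVolume.SetUp above because the operator has not yet created those serving-cert secrets; nestedpendingoperations parks each mount and allows a retry after durationBeforeRetry 500ms, and both mounts succeed on a later reconcile pass once the secrets exist (at 07:29:35.575790 and 07:29:35.778106 below). On repeated failures the delay between attempts grows from that 500ms base; a minimal sketch of the schedule's shape, where the doubling factor and the cap are illustrative assumptions rather than values taken from this log:

# Backoff shape for a repeatedly failing volume operation: start at the
# 500ms durationBeforeRetry seen above and double per failure, up to a cap.
# factor=2.0 and cap=120.0 are assumptions for illustration.
def backoff_schedule(initial=0.5, factor=2.0, cap=120.0, attempts=8):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_schedule()))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]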
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470581 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-trusted-ca-bundle\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470628 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470652 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-oauth-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470677 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-service-ca\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470729 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.470745 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-oauth-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571536 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571580 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-oauth-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571600 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-service-ca\") pod \"console-676cb6754c-f2xsh\" (UID:
\"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571658 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571701 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-oauth-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571751 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6gsl\" (UniqueName: \"kubernetes.io/projected/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-kube-api-access-x6gsl\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.571776 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-trusted-ca-bundle\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.572533 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.572775 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-oauth-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.572822 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-trusted-ca-bundle\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.573085 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-service-ca\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") 
" pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.575454 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-oauth-config\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.575790 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3aaf1b2-a146-43cd-91ab-8ee65cff6e44-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ds7np\" (UID: \"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.577951 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-console-serving-cert\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.590377 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6gsl\" (UniqueName: \"kubernetes.io/projected/3f112a1f-916b-4e6f-a1fa-7aa63eaa036f-kube-api-access-x6gsl\") pod \"console-676cb6754c-f2xsh\" (UID: \"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f\") " pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.663915 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.769309 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.773332 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.778106 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f69a645-8449-4c71-abdb-2d9a1413eae0-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-54kd5\" (UID: \"1f69a645-8449-4c71-abdb-2d9a1413eae0\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.851715 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-676cb6754c-f2xsh"] Nov 29 07:29:35 crc kubenswrapper[4660]: W1129 07:29:35.865346 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f112a1f_916b_4e6f_a1fa_7aa63eaa036f.slice/crio-ad0989aa322c47a5d232f6bdf0f74a3f11c9d6df7878fb6a140b4cc965bdd1ba WatchSource:0}: Error finding container ad0989aa322c47a5d232f6bdf0f74a3f11c9d6df7878fb6a140b4cc965bdd1ba: Status 404 returned error can't find the container with id ad0989aa322c47a5d232f6bdf0f74a3f11c9d6df7878fb6a140b4cc965bdd1ba Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.934528 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" Nov 29 07:29:35 crc kubenswrapper[4660]: I1129 07:29:35.968272 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np"] Nov 29 07:29:35 crc kubenswrapper[4660]: W1129 07:29:35.978718 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3aaf1b2_a146_43cd_91ab_8ee65cff6e44.slice/crio-3677cf1e3ee89d4460a98fc237b7cf2af46d26519bd7d4ebc9d13aaa67bab9ff WatchSource:0}: Error finding container 3677cf1e3ee89d4460a98fc237b7cf2af46d26519bd7d4ebc9d13aaa67bab9ff: Status 404 returned error can't find the container with id 3677cf1e3ee89d4460a98fc237b7cf2af46d26519bd7d4ebc9d13aaa67bab9ff Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.375744 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" event={"ID":"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44","Type":"ContainerStarted","Data":"3677cf1e3ee89d4460a98fc237b7cf2af46d26519bd7d4ebc9d13aaa67bab9ff"} Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.377202 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-676cb6754c-f2xsh" event={"ID":"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f","Type":"ContainerStarted","Data":"409e3edd1aa0028618c8490744c9bc0a2e868b547356d16cd2d34150cca75d22"} Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.377262 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-676cb6754c-f2xsh" event={"ID":"3f112a1f-916b-4e6f-a1fa-7aa63eaa036f","Type":"ContainerStarted","Data":"ad0989aa322c47a5d232f6bdf0f74a3f11c9d6df7878fb6a140b4cc965bdd1ba"} Nov 29 07:29:36 crc 
kubenswrapper[4660]: I1129 07:29:36.378871 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" event={"ID":"abb40e0e-8d39-4ede-a762-2968c5ae46a1","Type":"ContainerStarted","Data":"ba59a0d19c37a3cf045d44fce4b13171e8a71e25824993c593a5dd00b56520aa"} Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.393162 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-676cb6754c-f2xsh" podStartSLOduration=1.393140386 podStartE2EDuration="1.393140386s" podCreationTimestamp="2025-11-29 07:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:29:36.390938786 +0000 UTC m=+866.944468685" watchObservedRunningTime="2025-11-29 07:29:36.393140386 +0000 UTC m=+866.946670285" Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.502649 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5"] Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.941604 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:36 crc kubenswrapper[4660]: I1129 07:29:36.992391 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:37 crc kubenswrapper[4660]: I1129 07:29:37.172995 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:37 crc kubenswrapper[4660]: I1129 07:29:37.392800 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" event={"ID":"1f69a645-8449-4c71-abdb-2d9a1413eae0","Type":"ContainerStarted","Data":"29eae8232359c8df191ef293541932db4ef69e1807ab275080bf096e60cfa0a4"} Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.402482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" event={"ID":"abb40e0e-8d39-4ede-a762-2968c5ae46a1","Type":"ContainerStarted","Data":"1d9cdcf54e88ee87596eb00d9606317e058ab52a2ed98bad20e2b9f8feae2b2d"} Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.408071 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" event={"ID":"c3aaf1b2-a146-43cd-91ab-8ee65cff6e44","Type":"ContainerStarted","Data":"e351fd55a321c25a53ad8e6a56e7cb9251d45baec5bb0dfbd8e1abcbc5bf9116"} Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.408119 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bh6tl" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="registry-server" containerID="cri-o://36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d" gracePeriod=2 Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.439886 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" podStartSLOduration=2.210591863 podStartE2EDuration="4.439867372s" podCreationTimestamp="2025-11-29 07:29:34 +0000 UTC" firstStartedPulling="2025-11-29 07:29:35.983843599 +0000 UTC m=+866.537373498" lastFinishedPulling="2025-11-29 07:29:38.213119108 +0000 UTC m=+868.766649007" observedRunningTime="2025-11-29 07:29:38.433177087 +0000 UTC m=+868.986706986" watchObservedRunningTime="2025-11-29 
07:29:38.439867372 +0000 UTC m=+868.993397271" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.749835 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.816798 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities\") pod \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.816871 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg7\" (UniqueName: \"kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7\") pod \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.816964 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content\") pod \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\" (UID: \"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7\") " Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.817564 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities" (OuterVolumeSpecName: "utilities") pod "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" (UID: "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.821968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7" (OuterVolumeSpecName: "kube-api-access-wxkg7") pod "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" (UID: "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7"). InnerVolumeSpecName "kube-api-access-wxkg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.918507 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.918545 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg7\" (UniqueName: \"kubernetes.io/projected/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-kube-api-access-wxkg7\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:38 crc kubenswrapper[4660]: I1129 07:29:38.922096 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" (UID: "6e8f9ae1-75b8-442d-a4a7-b39d373f54a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.019948 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.422703 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hzjdq" event={"ID":"00b98def-8412-4510-a607-30ea7c13600d","Type":"ContainerStarted","Data":"aef7fbfa60938315250c0a8f25dd6acf595e403a42e74ab81148b06d76268190"} Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.424231 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hzjdq" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.434750 4660 generic.go:334] "Generic (PLEG): container finished" podID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerID="36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d" exitCode=0 Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.435547 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bh6tl" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.437979 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerDied","Data":"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d"} Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.439799 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.439865 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bh6tl" event={"ID":"6e8f9ae1-75b8-442d-a4a7-b39d373f54a7","Type":"ContainerDied","Data":"e7ba151095317ceebbf2ff4d6d1ea437d83d2482f23f2f199ff7847b108343d7"} Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.439933 4660 scope.go:117] "RemoveContainer" containerID="36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.442270 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hzjdq" podStartSLOduration=2.504837217 podStartE2EDuration="5.442257186s" podCreationTimestamp="2025-11-29 07:29:34 +0000 UTC" firstStartedPulling="2025-11-29 07:29:35.261317973 +0000 UTC m=+865.814847872" lastFinishedPulling="2025-11-29 07:29:38.198737922 +0000 UTC m=+868.752267841" observedRunningTime="2025-11-29 07:29:39.43987003 +0000 UTC m=+869.993399949" watchObservedRunningTime="2025-11-29 07:29:39.442257186 +0000 UTC m=+869.995787095" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.499371 4660 scope.go:117] "RemoveContainer" containerID="76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef" Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.505596 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.508788 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bh6tl"] Nov 29 07:29:39 crc kubenswrapper[4660]: E1129 07:29:39.511306 4660 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8f9ae1_75b8_442d_a4a7_b39d373f54a7.slice\": RecentStats: unable to find data in memory cache]"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.557060 4660 scope.go:117] "RemoveContainer" containerID="6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.578173 4660 scope.go:117] "RemoveContainer" containerID="36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d"
Nov 29 07:29:39 crc kubenswrapper[4660]: E1129 07:29:39.578660 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d\": container with ID starting with 36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d not found: ID does not exist" containerID="36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.578704 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d"} err="failed to get container status \"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d\": rpc error: code = NotFound desc = could not find container \"36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d\": container with ID starting with 36322a7568bdb5d02e9b104f9faa33deb66968fa0a052635b8dca0592be1111d not found: ID does not exist"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.578730 4660 scope.go:117] "RemoveContainer" containerID="76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef"
Nov 29 07:29:39 crc kubenswrapper[4660]: E1129 07:29:39.579099 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef\": container with ID starting with 76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef not found: ID does not exist" containerID="76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.579364 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef"} err="failed to get container status \"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef\": rpc error: code = NotFound desc = could not find container \"76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef\": container with ID starting with 76b80dcafbf797186e61960995fa6e6daa0d268380162d0bfd0e5d06b457f4ef not found: ID does not exist"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.579431 4660 scope.go:117] "RemoveContainer" containerID="6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f"
Nov 29 07:29:39 crc kubenswrapper[4660]: E1129 07:29:39.579739 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f\": container with ID starting with 6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f not found: ID does not exist" containerID="6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f"
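The NotFound errors above are the benign tail of deleting redhat-operators-bh6tl: CRI-O has already removed the containers, so the kubelet's follow-up ContainerStatus and DeleteContainer calls come back NotFound and pod_container_deletor simply logs the error and moves on. The underlying pattern is an idempotent delete, where "already gone" counts as done; a self-contained sketch of the idea (FakeRuntime is a stand-in, not kubelet code):

class NotFoundError(Exception):
    pass

class FakeRuntime:
    # Stand-in for the container runtime; holds the set of live container IDs.
    def __init__(self, containers):
        self.containers = set(containers)

    def remove(self, container_id):
        if container_id not in self.containers:
            raise NotFoundError(container_id)
        self.containers.remove(container_id)

def remove_container(runtime, container_id):
    try:
        runtime.remove(container_id)
    except NotFoundError:
        pass  # already removed: treat NotFound as success, as the log does

rt = FakeRuntime({"36322a75"})
remove_container(rt, "36322a75")  # removes the container
remove_container(rt, "36322a75")  # second call is a no-op, not a failure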
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.579782 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f"} err="failed to get container status \"6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f\": rpc error: code = NotFound desc = could not find container \"6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f\": container with ID starting with 6e2f34ad268552014f9187ff99c8cc9b4b33aaa4fc058b0cea285b17ceadb73f not found: ID does not exist"
Nov 29 07:29:39 crc kubenswrapper[4660]: I1129 07:29:39.703256 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" path="/var/lib/kubelet/pods/6e8f9ae1-75b8-442d-a4a7-b39d373f54a7/volumes"
Nov 29 07:29:40 crc kubenswrapper[4660]: I1129 07:29:40.444699 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" event={"ID":"1f69a645-8449-4c71-abdb-2d9a1413eae0","Type":"ContainerStarted","Data":"56740b0592340b43a25f4f73dbe60cd0b8813b1eba0adba063f44f983f73d38e"}
Nov 29 07:29:40 crc kubenswrapper[4660]: I1129 07:29:40.463683 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-54kd5" podStartSLOduration=3.53544422 podStartE2EDuration="6.463663195s" podCreationTimestamp="2025-11-29 07:29:34 +0000 UTC" firstStartedPulling="2025-11-29 07:29:36.517455285 +0000 UTC m=+867.070985194" lastFinishedPulling="2025-11-29 07:29:39.44567427 +0000 UTC m=+869.999204169" observedRunningTime="2025-11-29 07:29:40.457635498 +0000 UTC m=+871.011165397" watchObservedRunningTime="2025-11-29 07:29:40.463663195 +0000 UTC m=+871.017193094"
Nov 29 07:29:42 crc kubenswrapper[4660]: I1129 07:29:42.456224 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" event={"ID":"abb40e0e-8d39-4ede-a762-2968c5ae46a1","Type":"ContainerStarted","Data":"9bc8c7ed87c04af41e19800c8a3c6bd05442abcd7e439eb2bc017ed79f432e93"}
Nov 29 07:29:42 crc kubenswrapper[4660]: I1129 07:29:42.479851 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gxczb" podStartSLOduration=2.374230426 podStartE2EDuration="8.479826428s" podCreationTimestamp="2025-11-29 07:29:34 +0000 UTC" firstStartedPulling="2025-11-29 07:29:35.427721882 +0000 UTC m=+865.981251781" lastFinishedPulling="2025-11-29 07:29:41.533317884 +0000 UTC m=+872.086847783" observedRunningTime="2025-11-29 07:29:42.475134087 +0000 UTC m=+873.028664027" watchObservedRunningTime="2025-11-29 07:29:42.479826428 +0000 UTC m=+873.033356337"
Nov 29 07:29:45 crc kubenswrapper[4660]: I1129 07:29:45.243338 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hzjdq"
Nov 29 07:29:45 crc kubenswrapper[4660]: I1129 07:29:45.664135 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:45 crc kubenswrapper[4660]: I1129 07:29:45.664204 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:45 crc kubenswrapper[4660]: I1129 07:29:45.669907 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-676cb6754c-f2xsh"
Nov 29 07:29:46 crc kubenswrapper[4660]: I1129 07:29:46.492539 4660
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-676cb6754c-f2xsh" Nov 29 07:29:46 crc kubenswrapper[4660]: I1129 07:29:46.565421 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:29:55 crc kubenswrapper[4660]: I1129 07:29:55.774496 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.154549 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns"] Nov 29 07:30:00 crc kubenswrapper[4660]: E1129 07:30:00.156216 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="extract-content" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.156290 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="extract-content" Nov 29 07:30:00 crc kubenswrapper[4660]: E1129 07:30:00.156357 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.156415 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="extract-utilities" Nov 29 07:30:00 crc kubenswrapper[4660]: E1129 07:30:00.156476 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.156534 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.156753 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8f9ae1-75b8-442d-a4a7-b39d373f54a7" containerName="registry-server" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.157194 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.160940 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns"] Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.188150 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.188371 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.246859 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.246909 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plfcn\" (UniqueName: \"kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.246932 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.348756 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plfcn\" (UniqueName: \"kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.348828 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.349000 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.350212 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume\") pod 
\"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.357725 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.375550 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plfcn\" (UniqueName: \"kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn\") pod \"collect-profiles-29406690-285ns\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.507880 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:00 crc kubenswrapper[4660]: I1129 07:30:00.928689 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns"] Nov 29 07:30:01 crc kubenswrapper[4660]: I1129 07:30:01.574151 4660 generic.go:334] "Generic (PLEG): container finished" podID="6137363e-6e77-46e6-b455-9a8faf6119ba" containerID="098415aca6441cf9608411e26e850ff68008bd3bb0acc628f45ed8c998d51a24" exitCode=0 Nov 29 07:30:01 crc kubenswrapper[4660]: I1129 07:30:01.574403 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" event={"ID":"6137363e-6e77-46e6-b455-9a8faf6119ba","Type":"ContainerDied","Data":"098415aca6441cf9608411e26e850ff68008bd3bb0acc628f45ed8c998d51a24"} Nov 29 07:30:01 crc kubenswrapper[4660]: I1129 07:30:01.574539 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" event={"ID":"6137363e-6e77-46e6-b455-9a8faf6119ba","Type":"ContainerStarted","Data":"a634766714a49c49efd1eb47cfac8bca4dc03388dfa4969d587e65b2a8e1d16b"} Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.793171 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.888083 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume\") pod \"6137363e-6e77-46e6-b455-9a8faf6119ba\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.888140 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plfcn\" (UniqueName: \"kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn\") pod \"6137363e-6e77-46e6-b455-9a8faf6119ba\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.888211 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume\") pod \"6137363e-6e77-46e6-b455-9a8faf6119ba\" (UID: \"6137363e-6e77-46e6-b455-9a8faf6119ba\") " Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.888767 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "6137363e-6e77-46e6-b455-9a8faf6119ba" (UID: "6137363e-6e77-46e6-b455-9a8faf6119ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.893280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6137363e-6e77-46e6-b455-9a8faf6119ba" (UID: "6137363e-6e77-46e6-b455-9a8faf6119ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.893859 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn" (OuterVolumeSpecName: "kube-api-access-plfcn") pod "6137363e-6e77-46e6-b455-9a8faf6119ba" (UID: "6137363e-6e77-46e6-b455-9a8faf6119ba"). InnerVolumeSpecName "kube-api-access-plfcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.989471 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6137363e-6e77-46e6-b455-9a8faf6119ba-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.989505 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6137363e-6e77-46e6-b455-9a8faf6119ba-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:02 crc kubenswrapper[4660]: I1129 07:30:02.989517 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plfcn\" (UniqueName: \"kubernetes.io/projected/6137363e-6e77-46e6-b455-9a8faf6119ba-kube-api-access-plfcn\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:03 crc kubenswrapper[4660]: I1129 07:30:03.584644 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" event={"ID":"6137363e-6e77-46e6-b455-9a8faf6119ba","Type":"ContainerDied","Data":"a634766714a49c49efd1eb47cfac8bca4dc03388dfa4969d587e65b2a8e1d16b"} Nov 29 07:30:03 crc kubenswrapper[4660]: I1129 07:30:03.584862 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a634766714a49c49efd1eb47cfac8bca4dc03388dfa4969d587e65b2a8e1d16b" Nov 29 07:30:03 crc kubenswrapper[4660]: I1129 07:30:03.584750 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.358924 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm"] Nov 29 07:30:09 crc kubenswrapper[4660]: E1129 07:30:09.359629 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6137363e-6e77-46e6-b455-9a8faf6119ba" containerName="collect-profiles" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.359641 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6137363e-6e77-46e6-b455-9a8faf6119ba" containerName="collect-profiles" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.359747 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6137363e-6e77-46e6-b455-9a8faf6119ba" containerName="collect-profiles" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.360468 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.364379 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.370122 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm"] Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.473081 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.473140 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.473161 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6r9q\" (UniqueName: \"kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.574561 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.575055 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.575313 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6r9q\" (UniqueName: \"kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.574997 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.575276 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.593209 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6r9q\" (UniqueName: \"kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.683729 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 29 07:30:09 crc kubenswrapper[4660]: I1129 07:30:09.691587 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:10 crc kubenswrapper[4660]: I1129 07:30:10.128117 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm"] Nov 29 07:30:10 crc kubenswrapper[4660]: I1129 07:30:10.620237 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" event={"ID":"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9","Type":"ContainerStarted","Data":"c68992af7694bc7b92f2d953d8dd657bad0166d8dd34ac7b94e4c987bf9f0b98"} Nov 29 07:30:11 crc kubenswrapper[4660]: I1129 07:30:11.610579 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8qjn8" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" containerID="cri-o://d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25" gracePeriod=15 Nov 29 07:30:11 crc kubenswrapper[4660]: I1129 07:30:11.628405 4660 generic.go:334] "Generic (PLEG): container finished" podID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerID="b44e9cd1d9d3434a4a2dde571d42a682f94f172aa1e158172896e31e1c223a4d" exitCode=0 Nov 29 07:30:11 crc kubenswrapper[4660]: I1129 07:30:11.628455 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" event={"ID":"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9","Type":"ContainerDied","Data":"b44e9cd1d9d3434a4a2dde571d42a682f94f172aa1e158172896e31e1c223a4d"} Nov 29 07:30:11 crc kubenswrapper[4660]: I1129 07:30:11.949901 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8qjn8_f46e1d0c-84fc-4518-9101-a64174cee99a/console/0.log" Nov 29 07:30:11 crc kubenswrapper[4660]: I1129 07:30:11.950169 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.106970 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107313 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107534 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107631 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107704 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107734 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfxpj\" (UniqueName: \"kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107803 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert\") pod \"f46e1d0c-84fc-4518-9101-a64174cee99a\" (UID: \"f46e1d0c-84fc-4518-9101-a64174cee99a\") " Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.107982 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca" (OuterVolumeSpecName: "service-ca") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.108146 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-service-ca\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.110083 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config" (OuterVolumeSpecName: "console-config") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.110645 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.110666 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.113381 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.113933 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj" (OuterVolumeSpecName: "kube-api-access-lfxpj") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "kube-api-access-lfxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.114003 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f46e1d0c-84fc-4518-9101-a64174cee99a" (UID: "f46e1d0c-84fc-4518-9101-a64174cee99a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209565 4660 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209660 4660 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209677 4660 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f46e1d0c-84fc-4518-9101-a64174cee99a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209689 4660 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-console-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209700 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46e1d0c-84fc-4518-9101-a64174cee99a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.209711 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfxpj\" (UniqueName: \"kubernetes.io/projected/f46e1d0c-84fc-4518-9101-a64174cee99a-kube-api-access-lfxpj\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635311 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8qjn8_f46e1d0c-84fc-4518-9101-a64174cee99a/console/0.log" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635358 4660 generic.go:334] "Generic (PLEG): container finished" podID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerID="d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25" exitCode=2 Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635384 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8qjn8" event={"ID":"f46e1d0c-84fc-4518-9101-a64174cee99a","Type":"ContainerDied","Data":"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25"} Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635405 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8qjn8" event={"ID":"f46e1d0c-84fc-4518-9101-a64174cee99a","Type":"ContainerDied","Data":"a5b8f776bc378c9b48f563ab34a1e64c5921f39855f1999c2c38aeabacb43ccf"} Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635410 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8qjn8" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.635420 4660 scope.go:117] "RemoveContainer" containerID="d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.658631 4660 scope.go:117] "RemoveContainer" containerID="d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25" Nov 29 07:30:12 crc kubenswrapper[4660]: E1129 07:30:12.659124 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25\": container with ID starting with d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25 not found: ID does not exist" containerID="d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.659161 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25"} err="failed to get container status \"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25\": rpc error: code = NotFound desc = could not find container \"d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25\": container with ID starting with d751452945f89593b539ec3eb045dc129c6d9122f93c93f272d7f38c88035b25 not found: ID does not exist" Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.670867 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:30:12 crc kubenswrapper[4660]: I1129 07:30:12.676501 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8qjn8"] Nov 29 07:30:13 crc kubenswrapper[4660]: I1129 07:30:13.644699 4660 generic.go:334] "Generic (PLEG): container finished" podID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerID="4fd78efe560bcb30f9ec615b183344c6f29eb97a2fd4cf25e7589a924454ccda" exitCode=0 Nov 29 07:30:13 crc kubenswrapper[4660]: I1129 07:30:13.644775 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" event={"ID":"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9","Type":"ContainerDied","Data":"4fd78efe560bcb30f9ec615b183344c6f29eb97a2fd4cf25e7589a924454ccda"} Nov 29 07:30:13 crc kubenswrapper[4660]: I1129 07:30:13.707762 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" path="/var/lib/kubelet/pods/f46e1d0c-84fc-4518-9101-a64174cee99a/volumes" Nov 29 07:30:14 crc kubenswrapper[4660]: I1129 07:30:14.656879 4660 generic.go:334] "Generic (PLEG): container finished" podID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerID="555f9464ce7b0d16dadcfaa76f8d556ad0c82bcc719deb8f538ff0e272fe14a8" exitCode=0 Nov 29 07:30:14 crc kubenswrapper[4660]: I1129 07:30:14.656929 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" event={"ID":"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9","Type":"ContainerDied","Data":"555f9464ce7b0d16dadcfaa76f8d556ad0c82bcc719deb8f538ff0e272fe14a8"} Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.886418 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.959523 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle\") pod \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.959638 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6r9q\" (UniqueName: \"kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q\") pod \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.959666 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util\") pod \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\" (UID: \"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9\") " Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.960593 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle" (OuterVolumeSpecName: "bundle") pod "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" (UID: "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.966968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q" (OuterVolumeSpecName: "kube-api-access-b6r9q") pod "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" (UID: "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9"). InnerVolumeSpecName "kube-api-access-b6r9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:30:15 crc kubenswrapper[4660]: I1129 07:30:15.974366 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util" (OuterVolumeSpecName: "util") pod "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" (UID: "60b7eb5e-6d0c-47e0-bdfe-20c1069056a9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.061509 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.061563 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6r9q\" (UniqueName: \"kubernetes.io/projected/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-kube-api-access-b6r9q\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.061576 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60b7eb5e-6d0c-47e0-bdfe-20c1069056a9-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.675213 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" event={"ID":"60b7eb5e-6d0c-47e0-bdfe-20c1069056a9","Type":"ContainerDied","Data":"c68992af7694bc7b92f2d953d8dd657bad0166d8dd34ac7b94e4c987bf9f0b98"} Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.675712 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c68992af7694bc7b92f2d953d8dd657bad0166d8dd34ac7b94e4c987bf9f0b98" Nov 29 07:30:16 crc kubenswrapper[4660]: I1129 07:30:16.675337 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.529868 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp"] Nov 29 07:30:27 crc kubenswrapper[4660]: E1129 07:30:27.530538 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="pull" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530550 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="pull" Nov 29 07:30:27 crc kubenswrapper[4660]: E1129 07:30:27.530567 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="extract" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530574 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="extract" Nov 29 07:30:27 crc kubenswrapper[4660]: E1129 07:30:27.530581 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530588 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" Nov 29 07:30:27 crc kubenswrapper[4660]: E1129 07:30:27.530599 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="util" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530605 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="util" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530708 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46e1d0c-84fc-4518-9101-a64174cee99a" containerName="console" Nov 
29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.530718 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b7eb5e-6d0c-47e0-bdfe-20c1069056a9" containerName="extract" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.531268 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.542956 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.543165 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.543218 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.543327 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.545784 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qdc8n" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.558797 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp"] Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.604740 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-apiservice-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.604785 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5sf9\" (UniqueName: \"kubernetes.io/projected/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-kube-api-access-d5sf9\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.604812 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-webhook-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.706579 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-apiservice-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.706656 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5sf9\" (UniqueName: 
\"kubernetes.io/projected/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-kube-api-access-d5sf9\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.706690 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-webhook-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.712422 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-webhook-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.728994 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-apiservice-cert\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.730165 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5sf9\" (UniqueName: \"kubernetes.io/projected/0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87-kube-api-access-d5sf9\") pod \"metallb-operator-controller-manager-6cfc5c9847-cf8qp\" (UID: \"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87\") " pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.846097 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.991883 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c"] Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.992747 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.998333 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.998523 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hjzn5" Nov 29 07:30:27 crc kubenswrapper[4660]: I1129 07:30:27.998543 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.028176 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c"] Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.111024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-apiservice-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.111083 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvf4v\" (UniqueName: \"kubernetes.io/projected/28d7af7a-86cc-4ceb-bc24-eab722a9813a-kube-api-access-zvf4v\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.111124 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-webhook-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.212236 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-webhook-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.212620 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-apiservice-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.212652 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvf4v\" (UniqueName: \"kubernetes.io/projected/28d7af7a-86cc-4ceb-bc24-eab722a9813a-kube-api-access-zvf4v\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 
07:30:28.216820 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-apiservice-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.224104 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28d7af7a-86cc-4ceb-bc24-eab722a9813a-webhook-cert\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.230596 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvf4v\" (UniqueName: \"kubernetes.io/projected/28d7af7a-86cc-4ceb-bc24-eab722a9813a-kube-api-access-zvf4v\") pod \"metallb-operator-webhook-server-84c66bf9fd-dsq4c\" (UID: \"28d7af7a-86cc-4ceb-bc24-eab722a9813a\") " pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:28 crc kubenswrapper[4660]: I1129 07:30:28.307195 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:29 crc kubenswrapper[4660]: I1129 07:30:29.116079 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp"] Nov 29 07:30:29 crc kubenswrapper[4660]: I1129 07:30:29.305416 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c"] Nov 29 07:30:30 crc kubenswrapper[4660]: I1129 07:30:30.067340 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" event={"ID":"28d7af7a-86cc-4ceb-bc24-eab722a9813a","Type":"ContainerStarted","Data":"cd39b793a16b48e8203772bc125627b185b56f385907250c7e48eee5b030ba38"} Nov 29 07:30:30 crc kubenswrapper[4660]: I1129 07:30:30.069194 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" event={"ID":"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87","Type":"ContainerStarted","Data":"e1e9cad2b5a3eef8c3af9cf0df121cdd4def807874dce47523a150f2f3c79dc9"} Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.137838 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" event={"ID":"28d7af7a-86cc-4ceb-bc24-eab722a9813a","Type":"ContainerStarted","Data":"4a9d64a91327d0f1a5a7eff9402186df88be1b467cfe86cb438b96320ec5aad8"} Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.138391 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.145081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" event={"ID":"0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87","Type":"ContainerStarted","Data":"eafd6f43098956067c84e8fc3f1e60b70f5f9497cf7df220c0671df306c1b191"} Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.145781 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.160540 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" podStartSLOduration=2.616647891 podStartE2EDuration="15.160519181s" podCreationTimestamp="2025-11-29 07:30:27 +0000 UTC" firstStartedPulling="2025-11-29 07:30:29.315189638 +0000 UTC m=+919.868719537" lastFinishedPulling="2025-11-29 07:30:41.859060928 +0000 UTC m=+932.412590827" observedRunningTime="2025-11-29 07:30:42.154115535 +0000 UTC m=+932.707645444" watchObservedRunningTime="2025-11-29 07:30:42.160519181 +0000 UTC m=+932.714049080" Nov 29 07:30:42 crc kubenswrapper[4660]: I1129 07:30:42.186433 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" podStartSLOduration=2.491235503 podStartE2EDuration="15.186411976s" podCreationTimestamp="2025-11-29 07:30:27 +0000 UTC" firstStartedPulling="2025-11-29 07:30:29.139076041 +0000 UTC m=+919.692605940" lastFinishedPulling="2025-11-29 07:30:41.834252514 +0000 UTC m=+932.387782413" observedRunningTime="2025-11-29 07:30:42.180841122 +0000 UTC m=+932.734371031" watchObservedRunningTime="2025-11-29 07:30:42.186411976 +0000 UTC m=+932.739941885" Nov 29 07:30:58 crc kubenswrapper[4660]: I1129 07:30:58.313090 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84c66bf9fd-dsq4c" Nov 29 07:31:05 crc kubenswrapper[4660]: I1129 07:31:05.500166 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:31:05 crc kubenswrapper[4660]: I1129 07:31:05.500757 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:31:17 crc kubenswrapper[4660]: I1129 07:31:17.849481 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6cfc5c9847-cf8qp" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.630785 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.631764 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.635659 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-szl5x"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.638303 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.642990 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.643281 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.643697 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.644029 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-twjnc" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.661891 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.738489 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gcx42"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.739572 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gcx42" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.744760 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-t2bzl" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.745106 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.745531 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.752151 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.773588 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-cdr7b"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.774440 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.781404 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.789716 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cdr7b"] Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803777 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803820 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnkk4\" (UniqueName: \"kubernetes.io/projected/05fec9d8-e898-467e-9938-33ce089b3d15-kube-api-access-nnkk4\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803842 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-sockets\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803861 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-metrics\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803878 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05fec9d8-e898-467e-9938-33ce089b3d15-frr-startup\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803891 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-conf\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803920 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-reloader\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.803956 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7l4\" (UniqueName: \"kubernetes.io/projected/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-kube-api-access-zm7l4\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 
07:31:18.803973 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.904861 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-sockets\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.904906 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-metrics\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.904931 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05fec9d8-e898-467e-9938-33ce089b3d15-frr-startup\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.904947 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-conf\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905053 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-metrics-certs\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905080 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ff906a3b-62c0-4073-afaf-67e927a77020-metallb-excludel2\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905293 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-cert\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906092 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-reloader\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905301 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-sockets\") pod \"frr-k8s-szl5x\" 
(UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905551 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-frr-conf\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906043 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05fec9d8-e898-467e-9938-33ce089b3d15-frr-startup\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.905393 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-metrics\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906313 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05fec9d8-e898-467e-9938-33ce089b3d15-reloader\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906450 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79ps\" (UniqueName: \"kubernetes.io/projected/fb85aed1-c862-47ce-84e9-e5d44218faff-kube-api-access-b79ps\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906482 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4jn\" (UniqueName: \"kubernetes.io/projected/ff906a3b-62c0-4073-afaf-67e927a77020-kube-api-access-hb4jn\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906563 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7l4\" (UniqueName: \"kubernetes.io/projected/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-kube-api-access-zm7l4\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906583 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906898 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.906984 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-metrics-certs\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.907005 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.907026 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnkk4\" (UniqueName: \"kubernetes.io/projected/05fec9d8-e898-467e-9938-33ce089b3d15-kube-api-access-nnkk4\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: E1129 07:31:18.907236 4660 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 29 07:31:18 crc kubenswrapper[4660]: E1129 07:31:18.907284 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs podName:05fec9d8-e898-467e-9938-33ce089b3d15 nodeName:}" failed. No retries permitted until 2025-11-29 07:31:19.407271586 +0000 UTC m=+969.960801485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs") pod "frr-k8s-szl5x" (UID: "05fec9d8-e898-467e-9938-33ce089b3d15") : secret "frr-k8s-certs-secret" not found Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.912821 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.928350 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7l4\" (UniqueName: \"kubernetes.io/projected/2ea3483d-b488-4691-b2f6-3bdb54b0ef49-kube-api-access-zm7l4\") pod \"frr-k8s-webhook-server-7fcb986d4-pf7m4\" (UID: \"2ea3483d-b488-4691-b2f6-3bdb54b0ef49\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.932237 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnkk4\" (UniqueName: \"kubernetes.io/projected/05fec9d8-e898-467e-9938-33ce089b3d15-kube-api-access-nnkk4\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:18 crc kubenswrapper[4660]: I1129 07:31:18.949129 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007754 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-metrics-certs\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007839 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ff906a3b-62c0-4073-afaf-67e927a77020-metallb-excludel2\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007875 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-cert\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007915 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b79ps\" (UniqueName: \"kubernetes.io/projected/fb85aed1-c862-47ce-84e9-e5d44218faff-kube-api-access-b79ps\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007948 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4jn\" (UniqueName: \"kubernetes.io/projected/ff906a3b-62c0-4073-afaf-67e927a77020-kube-api-access-hb4jn\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.007993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: E1129 07:31:19.008148 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:31:19 crc kubenswrapper[4660]: E1129 07:31:19.008211 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist podName:ff906a3b-62c0-4073-afaf-67e927a77020 nodeName:}" failed. No retries permitted until 2025-11-29 07:31:19.508190709 +0000 UTC m=+970.061720608 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist") pod "speaker-gcx42" (UID: "ff906a3b-62c0-4073-afaf-67e927a77020") : secret "metallb-memberlist" not found Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.008339 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-metrics-certs\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.008805 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ff906a3b-62c0-4073-afaf-67e927a77020-metallb-excludel2\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.009731 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.014284 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-metrics-certs\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.014351 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-metrics-certs\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.021312 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb85aed1-c862-47ce-84e9-e5d44218faff-cert\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.024980 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb4jn\" (UniqueName: \"kubernetes.io/projected/ff906a3b-62c0-4073-afaf-67e927a77020-kube-api-access-hb4jn\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.025289 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b79ps\" (UniqueName: \"kubernetes.io/projected/fb85aed1-c862-47ce-84e9-e5d44218faff-kube-api-access-b79ps\") pod \"controller-f8648f98b-cdr7b\" (UID: \"fb85aed1-c862-47ce-84e9-e5d44218faff\") " pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.091960 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.210690 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4"] Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.342369 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cdr7b"] Nov 29 07:31:19 crc kubenswrapper[4660]: W1129 07:31:19.346924 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb85aed1_c862_47ce_84e9_e5d44218faff.slice/crio-8da65c7eeee904a95238fdfa4e948ebd28b9a5a257c2146540c3cd2df0acd0ff WatchSource:0}: Error finding container 8da65c7eeee904a95238fdfa4e948ebd28b9a5a257c2146540c3cd2df0acd0ff: Status 404 returned error can't find the container with id 8da65c7eeee904a95238fdfa4e948ebd28b9a5a257c2146540c3cd2df0acd0ff Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.361016 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" event={"ID":"2ea3483d-b488-4691-b2f6-3bdb54b0ef49","Type":"ContainerStarted","Data":"8c23a30ed7e17f0c7f80816b0d4e0414a36c883e334dd0392f372c068665e103"} Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.361972 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdr7b" event={"ID":"fb85aed1-c862-47ce-84e9-e5d44218faff","Type":"ContainerStarted","Data":"8da65c7eeee904a95238fdfa4e948ebd28b9a5a257c2146540c3cd2df0acd0ff"} Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.412194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.417677 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05fec9d8-e898-467e-9938-33ce089b3d15-metrics-certs\") pod \"frr-k8s-szl5x\" (UID: \"05fec9d8-e898-467e-9938-33ce089b3d15\") " pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.514279 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:19 crc kubenswrapper[4660]: E1129 07:31:19.514442 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:31:19 crc kubenswrapper[4660]: E1129 07:31:19.514502 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist podName:ff906a3b-62c0-4073-afaf-67e927a77020 nodeName:}" failed. No retries permitted until 2025-11-29 07:31:20.514488212 +0000 UTC m=+971.068018111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist") pod "speaker-gcx42" (UID: "ff906a3b-62c0-4073-afaf-67e927a77020") : secret "metallb-memberlist" not found Nov 29 07:31:19 crc kubenswrapper[4660]: I1129 07:31:19.557402 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:31:20 crc kubenswrapper[4660]: I1129 07:31:20.531038 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:20 crc kubenswrapper[4660]: E1129 07:31:20.531743 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:31:20 crc kubenswrapper[4660]: E1129 07:31:20.531848 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist podName:ff906a3b-62c0-4073-afaf-67e927a77020 nodeName:}" failed. No retries permitted until 2025-11-29 07:31:22.531811877 +0000 UTC m=+973.085341816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist") pod "speaker-gcx42" (UID: "ff906a3b-62c0-4073-afaf-67e927a77020") : secret "metallb-memberlist" not found Nov 29 07:31:22 crc kubenswrapper[4660]: I1129 07:31:22.555946 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:22 crc kubenswrapper[4660]: E1129 07:31:22.556125 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:31:22 crc kubenswrapper[4660]: E1129 07:31:22.556456 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist podName:ff906a3b-62c0-4073-afaf-67e927a77020 nodeName:}" failed. No retries permitted until 2025-11-29 07:31:26.556437443 +0000 UTC m=+977.109967342 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist") pod "speaker-gcx42" (UID: "ff906a3b-62c0-4073-afaf-67e927a77020") : secret "metallb-memberlist" not found Nov 29 07:31:26 crc kubenswrapper[4660]: I1129 07:31:26.594523 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:26 crc kubenswrapper[4660]: E1129 07:31:26.594714 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 29 07:31:26 crc kubenswrapper[4660]: E1129 07:31:26.595508 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist podName:ff906a3b-62c0-4073-afaf-67e927a77020 nodeName:}" failed. 
No retries permitted until 2025-11-29 07:31:34.595487447 +0000 UTC m=+985.149017346 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist") pod "speaker-gcx42" (UID: "ff906a3b-62c0-4073-afaf-67e927a77020") : secret "metallb-memberlist" not found Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.253690 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.255309 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.269949 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.413237 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.413306 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j45c\" (UniqueName: \"kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.413387 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.413639 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdr7b" event={"ID":"fb85aed1-c862-47ce-84e9-e5d44218faff","Type":"ContainerStarted","Data":"fd9da92a1bcae6407ba31a42849c77393991b15af165a736531847f6c6e26d56"} Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.414845 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"0c87914a687c1b247abc850b87623564ee133387f1fda4c18aaad59eeffd2260"} Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.514994 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.515042 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j45c\" (UniqueName: \"kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " 
pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.515094 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.515521 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.515575 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.533741 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j45c\" (UniqueName: \"kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c\") pod \"redhat-marketplace-jjvbm\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.578273 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:31:28 crc kubenswrapper[4660]: I1129 07:31:28.865123 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:31:29 crc kubenswrapper[4660]: I1129 07:31:29.424095 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerStarted","Data":"8ed8b218ebf1426256edf191e8cd941f9f2180e7c8c40bd2c086764b18ca530c"} Nov 29 07:31:31 crc kubenswrapper[4660]: E1129 07:31:31.988526 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage979898162/1\": happened during read: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" Nov 29 07:31:31 crc kubenswrapper[4660]: E1129 07:31:31.989065 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) 
--metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm7l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7fcb986d4-pf7m4_metallb-system(2ea3483d-b488-4691-b2f6-3bdb54b0ef49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage979898162/1\": happened during read: context canceled" logger="UnhandledError" Nov 29 07:31:31 crc kubenswrapper[4660]: E1129 07:31:31.990275 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage979898162/1\\\": happened during read: context canceled\"" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" podUID="2ea3483d-b488-4691-b2f6-3bdb54b0ef49" Nov 29 07:31:32 crc kubenswrapper[4660]: I1129 07:31:32.443931 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdr7b" event={"ID":"fb85aed1-c862-47ce-84e9-e5d44218faff","Type":"ContainerStarted","Data":"e7557b539340d3c93dce7f12b30fd673fde21897eaf9a57c1c70ff86e7f5c9e6"} Nov 29 07:31:32 crc kubenswrapper[4660]: I1129 07:31:32.444345 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:32 crc kubenswrapper[4660]: I1129 07:31:32.445764 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerID="ee26392e0e3d59e5fa272ee48da3b8a7535145f73336f82cac32060e8ae0a232" exitCode=0 Nov 29 07:31:32 crc kubenswrapper[4660]: I1129 07:31:32.445808 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerDied","Data":"ee26392e0e3d59e5fa272ee48da3b8a7535145f73336f82cac32060e8ae0a232"} Nov 29 07:31:32 crc kubenswrapper[4660]: E1129 07:31:32.447773 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" podUID="2ea3483d-b488-4691-b2f6-3bdb54b0ef49" Nov 29 07:31:32 crc kubenswrapper[4660]: I1129 07:31:32.464551 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-cdr7b" podStartSLOduration=14.464533506 podStartE2EDuration="14.464533506s" podCreationTimestamp="2025-11-29 07:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:31:32.463885608 +0000 UTC m=+983.017415507" watchObservedRunningTime="2025-11-29 07:31:32.464533506 +0000 UTC m=+983.018063425" Nov 29 07:31:34 crc kubenswrapper[4660]: I1129 07:31:34.619714 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:34 crc kubenswrapper[4660]: I1129 07:31:34.630460 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ff906a3b-62c0-4073-afaf-67e927a77020-memberlist\") pod \"speaker-gcx42\" (UID: \"ff906a3b-62c0-4073-afaf-67e927a77020\") " pod="metallb-system/speaker-gcx42" Nov 29 07:31:34 crc kubenswrapper[4660]: I1129 07:31:34.652583 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gcx42" Nov 29 07:31:35 crc kubenswrapper[4660]: I1129 07:31:35.467949 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcx42" event={"ID":"ff906a3b-62c0-4073-afaf-67e927a77020","Type":"ContainerStarted","Data":"537db7c6be820599be365a0b2b8b4a00ee77f31f33cd8ca17d7f6e45d7d58ecb"} Nov 29 07:31:35 crc kubenswrapper[4660]: I1129 07:31:35.501049 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:31:35 crc kubenswrapper[4660]: I1129 07:31:35.501377 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:31:38 crc kubenswrapper[4660]: I1129 07:31:38.488081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcx42" event={"ID":"ff906a3b-62c0-4073-afaf-67e927a77020","Type":"ContainerStarted","Data":"74a14bf4830a45b7ab48ccd032565cbad03ea3ed31a13f3333c4a681bd3f8c35"} Nov 29 07:31:39 crc kubenswrapper[4660]: I1129 07:31:39.096360 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-cdr7b" Nov 29 07:31:41 crc kubenswrapper[4660]: I1129 07:31:41.528423 4660 generic.go:334] "Generic (PLEG): container finished" podID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerID="f355f6e5fd54e3b95dde27ba2670d286ca3e19a5d2961c5efc38d45db732af89" exitCode=0 Nov 29 07:31:41 crc kubenswrapper[4660]: I1129 07:31:41.528666 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerDied","Data":"f355f6e5fd54e3b95dde27ba2670d286ca3e19a5d2961c5efc38d45db732af89"} Nov 29 07:31:41 crc kubenswrapper[4660]: I1129 07:31:41.533956 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcx42" event={"ID":"ff906a3b-62c0-4073-afaf-67e927a77020","Type":"ContainerStarted","Data":"ee408f57d61e7b44affd1e9cb340c18e46db67b5b41326ffdcfbdfdeb57fc19c"} Nov 29 07:31:41 crc kubenswrapper[4660]: I1129 07:31:41.534174 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gcx42" Nov 29 07:31:41 crc kubenswrapper[4660]: I1129 07:31:41.579752 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gcx42" podStartSLOduration=23.579725906 podStartE2EDuration="23.579725906s" podCreationTimestamp="2025-11-29 07:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:31:41.578315507 +0000 UTC m=+992.131845406" watchObservedRunningTime="2025-11-29 07:31:41.579725906 +0000 UTC m=+992.133255845" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.700581 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.703223 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.745984 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.858285 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.858640 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfhb\" (UniqueName: \"kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.858681 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.960245 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfhb\" (UniqueName: \"kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.960312 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.960368 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.960914 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.961447 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:42 crc kubenswrapper[4660]: I1129 07:31:42.992924 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2gfhb\" (UniqueName: \"kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb\") pod \"certified-operators-7r6kk\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:43 crc kubenswrapper[4660]: I1129 07:31:43.065901 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:31:54 crc kubenswrapper[4660]: I1129 07:31:54.657209 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gcx42" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.570852 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.571970 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.585978 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.586183 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.586415 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hpjcs" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.589399 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.594967 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kc8\" (UniqueName: \"kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8\") pod \"openstack-operator-index-7n9hx\" (UID: \"78fddffe-8340-4e5d-8e40-d493d219d3ee\") " pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.696428 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kc8\" (UniqueName: \"kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8\") pod \"openstack-operator-index-7n9hx\" (UID: \"78fddffe-8340-4e5d-8e40-d493d219d3ee\") " pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.726658 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kc8\" (UniqueName: \"kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8\") pod \"openstack-operator-index-7n9hx\" (UID: \"78fddffe-8340-4e5d-8e40-d493d219d3ee\") " pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:31:57 crc kubenswrapper[4660]: I1129 07:31:57.900004 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:31:59 crc kubenswrapper[4660]: I1129 07:31:59.639241 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:31:59 crc kubenswrapper[4660]: I1129 07:31:59.712890 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:32:00 crc kubenswrapper[4660]: I1129 07:32:00.651589 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerStarted","Data":"025d39b2edac0e25ce6a3274508269e747ef8600686797ed0fb41e3d1ead1d74"} Nov 29 07:32:00 crc kubenswrapper[4660]: I1129 07:32:00.653852 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7n9hx" event={"ID":"78fddffe-8340-4e5d-8e40-d493d219d3ee","Type":"ContainerStarted","Data":"237dcceea03da3b26a072aac9eaa8283dc0f919317c5761cd04bf0875a9ecb39"} Nov 29 07:32:02 crc kubenswrapper[4660]: I1129 07:32:02.122223 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:32:02 crc kubenswrapper[4660]: I1129 07:32:02.931072 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6hcx4"] Nov 29 07:32:02 crc kubenswrapper[4660]: I1129 07:32:02.931945 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:02 crc kubenswrapper[4660]: I1129 07:32:02.953093 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6hcx4"] Nov 29 07:32:02 crc kubenswrapper[4660]: I1129 07:32:02.968350 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cc6v\" (UniqueName: \"kubernetes.io/projected/8afce996-f777-4ef3-a57d-d09faabc1b46-kube-api-access-6cc6v\") pod \"openstack-operator-index-6hcx4\" (UID: \"8afce996-f777-4ef3-a57d-d09faabc1b46\") " pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:03 crc kubenswrapper[4660]: I1129 07:32:03.070008 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cc6v\" (UniqueName: \"kubernetes.io/projected/8afce996-f777-4ef3-a57d-d09faabc1b46-kube-api-access-6cc6v\") pod \"openstack-operator-index-6hcx4\" (UID: \"8afce996-f777-4ef3-a57d-d09faabc1b46\") " pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:03 crc kubenswrapper[4660]: I1129 07:32:03.091169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cc6v\" (UniqueName: \"kubernetes.io/projected/8afce996-f777-4ef3-a57d-d09faabc1b46-kube-api-access-6cc6v\") pod \"openstack-operator-index-6hcx4\" (UID: \"8afce996-f777-4ef3-a57d-d09faabc1b46\") " pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:03 crc kubenswrapper[4660]: I1129 07:32:03.263003 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:03 crc kubenswrapper[4660]: I1129 07:32:03.681065 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6hcx4"] Nov 29 07:32:03 crc kubenswrapper[4660]: W1129 07:32:03.684909 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8afce996_f777_4ef3_a57d_d09faabc1b46.slice/crio-f119824405e04c0d8693f9df7a1a57ffa9805c71bcf59cd2c390ffce333bc147 WatchSource:0}: Error finding container f119824405e04c0d8693f9df7a1a57ffa9805c71bcf59cd2c390ffce333bc147: Status 404 returned error can't find the container with id f119824405e04c0d8693f9df7a1a57ffa9805c71bcf59cd2c390ffce333bc147 Nov 29 07:32:04 crc kubenswrapper[4660]: I1129 07:32:04.677795 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6hcx4" event={"ID":"8afce996-f777-4ef3-a57d-d09faabc1b46","Type":"ContainerStarted","Data":"f119824405e04c0d8693f9df7a1a57ffa9805c71bcf59cd2c390ffce333bc147"} Nov 29 07:32:04 crc kubenswrapper[4660]: E1129 07:32:04.807805 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" Nov 29 07:32:04 crc kubenswrapper[4660]: E1129 07:32:04.808266 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnkk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-szl5x_metallb-system(05fec9d8-e898-467e-9938-33ce089b3d15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 29 07:32:04 crc kubenswrapper[4660]: E1129 07:32:04.809838 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="metallb-system/frr-k8s-szl5x" podUID="05fec9d8-e898-467e-9938-33ce089b3d15" Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.499899 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.499957 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.500001 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.500441 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.500493 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1" gracePeriod=600 Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.688940 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1" exitCode=0 Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.689019 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1"} Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.689063 4660 scope.go:117] "RemoveContainer" containerID="ba7bcb77e4d299d679fd34242a1b77b4792c3db7cdb7365569436d0dd85e0583" Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.691104 4660 generic.go:334] "Generic (PLEG): container finished" podID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerID="12ca23909338a39434a35fccf15abaea5e0778b955063e55d2f886668f421cc8" exitCode=0 Nov 29 07:32:05 crc kubenswrapper[4660]: I1129 07:32:05.691183 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerDied","Data":"12ca23909338a39434a35fccf15abaea5e0778b955063e55d2f886668f421cc8"} Nov 29 07:32:08 crc kubenswrapper[4660]: E1129 07:32:08.441326 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a\\\"\"" pod="metallb-system/frr-k8s-szl5x" podUID="05fec9d8-e898-467e-9938-33ce089b3d15" Nov 29 07:32:12 crc kubenswrapper[4660]: I1129 07:32:12.757344 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a"} Nov 29 07:32:15 crc kubenswrapper[4660]: I1129 07:32:15.773313 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerStarted","Data":"9b1df81dd78c517d1da78c010c8d49d7d2d332150d9ef9e53014a40ac2ef8c5f"} Nov 29 07:32:16 crc kubenswrapper[4660]: I1129 07:32:16.797792 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jjvbm" podStartSLOduration=11.205409865 podStartE2EDuration="48.797777607s" podCreationTimestamp="2025-11-29 07:31:28 +0000 UTC" firstStartedPulling="2025-11-29 07:31:32.447344063 +0000 UTC m=+983.000873962" lastFinishedPulling="2025-11-29 07:32:10.039711795 +0000 UTC m=+1020.593241704" observedRunningTime="2025-11-29 07:32:16.795568686 +0000 UTC m=+1027.349098585" watchObservedRunningTime="2025-11-29 07:32:16.797777607 +0000 UTC m=+1027.351307506" Nov 29 07:32:18 crc kubenswrapper[4660]: I1129 07:32:18.579309 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:18 crc kubenswrapper[4660]: I1129 07:32:18.579386 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:18 crc kubenswrapper[4660]: I1129 07:32:18.644126 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:25 crc kubenswrapper[4660]: E1129 07:32:25.355098 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.124:5001/openstack-k8s-operators/openstack-operator-index:3aa569ba0ab6c593fcdf83fcb2a7a1f3431918f1" Nov 29 07:32:25 crc kubenswrapper[4660]: E1129 07:32:25.355600 4660 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.124:5001/openstack-k8s-operators/openstack-operator-index:3aa569ba0ab6c593fcdf83fcb2a7a1f3431918f1" Nov 29 07:32:25 crc kubenswrapper[4660]: E1129 07:32:25.355733 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:38.129.56.124:5001/openstack-k8s-operators/openstack-operator-index:3aa569ba0ab6c593fcdf83fcb2a7a1f3431918f1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-operator-index-7n9hx_openstack-operators(78fddffe-8340-4e5d-8e40-d493d219d3ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:32:25 crc kubenswrapper[4660]: E1129 07:32:25.357873 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-operator-index-7n9hx" podUID="78fddffe-8340-4e5d-8e40-d493d219d3ee" Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.115372 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.266882 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8kc8\" (UniqueName: \"kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8\") pod \"78fddffe-8340-4e5d-8e40-d493d219d3ee\" (UID: \"78fddffe-8340-4e5d-8e40-d493d219d3ee\") " Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.278885 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8" (OuterVolumeSpecName: "kube-api-access-j8kc8") pod "78fddffe-8340-4e5d-8e40-d493d219d3ee" (UID: "78fddffe-8340-4e5d-8e40-d493d219d3ee"). InnerVolumeSpecName "kube-api-access-j8kc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.368216 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8kc8\" (UniqueName: \"kubernetes.io/projected/78fddffe-8340-4e5d-8e40-d493d219d3ee-kube-api-access-j8kc8\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.855596 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7n9hx" event={"ID":"78fddffe-8340-4e5d-8e40-d493d219d3ee","Type":"ContainerDied","Data":"237dcceea03da3b26a072aac9eaa8283dc0f919317c5761cd04bf0875a9ecb39"} Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.855721 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7n9hx" Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.913588 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:32:26 crc kubenswrapper[4660]: I1129 07:32:26.922393 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7n9hx"] Nov 29 07:32:27 crc kubenswrapper[4660]: I1129 07:32:27.701967 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fddffe-8340-4e5d-8e40-d493d219d3ee" path="/var/lib/kubelet/pods/78fddffe-8340-4e5d-8e40-d493d219d3ee/volumes" Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.621675 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.659253 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.868201 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" event={"ID":"2ea3483d-b488-4691-b2f6-3bdb54b0ef49","Type":"ContainerStarted","Data":"f2e2a862e91c35116a2887d58505f2cf807096380164655affdeb0dd4710adfd"} Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.868407 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.869694 4660 generic.go:334] "Generic (PLEG): container finished" podID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerID="655289139d9ef3cd38697f9bd30b4020c9c4171e07a53123c44dcecaa600f8e3" exitCode=0 Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.869776 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerDied","Data":"655289139d9ef3cd38697f9bd30b4020c9c4171e07a53123c44dcecaa600f8e3"} Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.871098 4660 generic.go:334] "Generic (PLEG): container finished" podID="05fec9d8-e898-467e-9938-33ce089b3d15" containerID="0cac98b75ec696774d5782c05bcd17519a2d00cb1ea8c348e4c81a6ecedea9aa" exitCode=0 Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.871216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerDied","Data":"0cac98b75ec696774d5782c05bcd17519a2d00cb1ea8c348e4c81a6ecedea9aa"} Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.871292 4660 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jjvbm" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="registry-server" containerID="cri-o://9b1df81dd78c517d1da78c010c8d49d7d2d332150d9ef9e53014a40ac2ef8c5f" gracePeriod=2 Nov 29 07:32:28 crc kubenswrapper[4660]: I1129 07:32:28.890152 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" podStartSLOduration=19.354009603 podStartE2EDuration="1m10.890132175s" podCreationTimestamp="2025-11-29 07:31:18 +0000 UTC" firstStartedPulling="2025-11-29 07:31:19.220240527 +0000 UTC m=+969.773770416" lastFinishedPulling="2025-11-29 07:32:10.756363089 +0000 UTC m=+1021.309892988" observedRunningTime="2025-11-29 07:32:28.886748011 +0000 UTC m=+1039.440277920" watchObservedRunningTime="2025-11-29 07:32:28.890132175 +0000 UTC m=+1039.443662074" Nov 29 07:32:29 crc kubenswrapper[4660]: I1129 07:32:29.880688 4660 generic.go:334] "Generic (PLEG): container finished" podID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerID="9b1df81dd78c517d1da78c010c8d49d7d2d332150d9ef9e53014a40ac2ef8c5f" exitCode=0 Nov 29 07:32:29 crc kubenswrapper[4660]: I1129 07:32:29.881014 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerDied","Data":"9b1df81dd78c517d1da78c010c8d49d7d2d332150d9ef9e53014a40ac2ef8c5f"} Nov 29 07:32:29 crc kubenswrapper[4660]: I1129 07:32:29.884073 4660 generic.go:334] "Generic (PLEG): container finished" podID="05fec9d8-e898-467e-9938-33ce089b3d15" containerID="d578753b83292c1362b199660a7f989f2038764b4a9e901e21be739927323d27" exitCode=0 Nov 29 07:32:29 crc kubenswrapper[4660]: I1129 07:32:29.884712 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerDied","Data":"d578753b83292c1362b199660a7f989f2038764b4a9e901e21be739927323d27"} Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.420189 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.523956 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j45c\" (UniqueName: \"kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c\") pod \"41ebb470-f873-48aa-b72d-582bb1e393b9\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.524245 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities\") pod \"41ebb470-f873-48aa-b72d-582bb1e393b9\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.524327 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content\") pod \"41ebb470-f873-48aa-b72d-582bb1e393b9\" (UID: \"41ebb470-f873-48aa-b72d-582bb1e393b9\") " Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.525741 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities" (OuterVolumeSpecName: "utilities") pod "41ebb470-f873-48aa-b72d-582bb1e393b9" (UID: "41ebb470-f873-48aa-b72d-582bb1e393b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.529209 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c" (OuterVolumeSpecName: "kube-api-access-7j45c") pod "41ebb470-f873-48aa-b72d-582bb1e393b9" (UID: "41ebb470-f873-48aa-b72d-582bb1e393b9"). InnerVolumeSpecName "kube-api-access-7j45c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.540817 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41ebb470-f873-48aa-b72d-582bb1e393b9" (UID: "41ebb470-f873-48aa-b72d-582bb1e393b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.625815 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j45c\" (UniqueName: \"kubernetes.io/projected/41ebb470-f873-48aa-b72d-582bb1e393b9-kube-api-access-7j45c\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.625855 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.625867 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ebb470-f873-48aa-b72d-582bb1e393b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.891190 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjvbm" event={"ID":"41ebb470-f873-48aa-b72d-582bb1e393b9","Type":"ContainerDied","Data":"8ed8b218ebf1426256edf191e8cd941f9f2180e7c8c40bd2c086764b18ca530c"} Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.891225 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjvbm" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.891517 4660 scope.go:117] "RemoveContainer" containerID="9b1df81dd78c517d1da78c010c8d49d7d2d332150d9ef9e53014a40ac2ef8c5f" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.893626 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerStarted","Data":"29d0e4fbb0e7ff29e72ba19e0d660fae61b2b6e93c9c0dae42bb943ee3604645"} Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.894541 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6hcx4" event={"ID":"8afce996-f777-4ef3-a57d-d09faabc1b46","Type":"ContainerStarted","Data":"fb078a480159d0d425329415b07aa9d72738a18c84af63081696e31c0e5cf45e"} Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.897159 4660 generic.go:334] "Generic (PLEG): container finished" podID="05fec9d8-e898-467e-9938-33ce089b3d15" containerID="f3c77c42263a7a47f6ed000f6011955a2664440c593aa64c359b6f81def2fac0" exitCode=0 Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.897186 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerDied","Data":"f3c77c42263a7a47f6ed000f6011955a2664440c593aa64c359b6f81def2fac0"} Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.908503 4660 scope.go:117] "RemoveContainer" containerID="f355f6e5fd54e3b95dde27ba2670d286ca3e19a5d2961c5efc38d45db732af89" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.935538 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7r6kk" podStartSLOduration=28.211407537 podStartE2EDuration="48.935522246s" podCreationTimestamp="2025-11-29 07:31:42 +0000 UTC" firstStartedPulling="2025-11-29 07:32:09.295178945 +0000 UTC m=+1019.848708844" lastFinishedPulling="2025-11-29 07:32:30.019293654 +0000 UTC m=+1040.572823553" observedRunningTime="2025-11-29 07:32:30.917592402 +0000 UTC m=+1041.471122291" watchObservedRunningTime="2025-11-29 
07:32:30.935522246 +0000 UTC m=+1041.489052145" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.935940 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6hcx4" podStartSLOduration=2.602975645 podStartE2EDuration="28.935934377s" podCreationTimestamp="2025-11-29 07:32:02 +0000 UTC" firstStartedPulling="2025-11-29 07:32:03.687036542 +0000 UTC m=+1014.240566441" lastFinishedPulling="2025-11-29 07:32:30.019995274 +0000 UTC m=+1040.573525173" observedRunningTime="2025-11-29 07:32:30.934328233 +0000 UTC m=+1041.487858152" watchObservedRunningTime="2025-11-29 07:32:30.935934377 +0000 UTC m=+1041.489464276" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.954408 4660 scope.go:117] "RemoveContainer" containerID="ee26392e0e3d59e5fa272ee48da3b8a7535145f73336f82cac32060e8ae0a232" Nov 29 07:32:30 crc kubenswrapper[4660]: I1129 07:32:30.997753 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:32:31 crc kubenswrapper[4660]: I1129 07:32:31.001977 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjvbm"] Nov 29 07:32:31 crc kubenswrapper[4660]: I1129 07:32:31.700254 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" path="/var/lib/kubelet/pods/41ebb470-f873-48aa-b72d-582bb1e393b9/volumes" Nov 29 07:32:31 crc kubenswrapper[4660]: I1129 07:32:31.908405 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"a867394153271cdaa5c170ef0ffbf82d4762f322c0312a961d9285171a31b8d2"} Nov 29 07:32:31 crc kubenswrapper[4660]: I1129 07:32:31.908772 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"521e9526aa09af76108036a69cdb838a5627f3e639adb1c7a014c9750130634d"} Nov 29 07:32:31 crc kubenswrapper[4660]: I1129 07:32:31.908802 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"1472a7f66e0dbad0efa1b6e10d16afb746452e2cbfbe3445be0e3c43e81ab75f"} Nov 29 07:32:32 crc kubenswrapper[4660]: I1129 07:32:32.942145 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"b34a2485693e51593f4be0adbf9b77a183a912c27205be7e2d7aa199bc3c1823"} Nov 29 07:32:32 crc kubenswrapper[4660]: I1129 07:32:32.942374 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"0322b9eb4029ecee0c5793db391972a65dc59ea4e6019a6b4dd3f31551f77f1f"} Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.067366 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.067412 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.115466 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7r6kk" 
Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.263247 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.263293 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.295557 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.954739 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szl5x" event={"ID":"05fec9d8-e898-467e-9938-33ce089b3d15","Type":"ContainerStarted","Data":"e1f860f04fcc85d5bc40faf506bf9b9fa3495f93076908c95639b02782f867e8"} Nov 29 07:32:33 crc kubenswrapper[4660]: I1129 07:32:33.995257 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-szl5x" podStartSLOduration=-9223371960.859547 podStartE2EDuration="1m15.995228949s" podCreationTimestamp="2025-11-29 07:31:18 +0000 UTC" firstStartedPulling="2025-11-29 07:31:28.074079541 +0000 UTC m=+978.627609440" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:32:33.984632917 +0000 UTC m=+1044.538162816" watchObservedRunningTime="2025-11-29 07:32:33.995228949 +0000 UTC m=+1044.548758908" Nov 29 07:32:34 crc kubenswrapper[4660]: I1129 07:32:34.557741 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:32:34 crc kubenswrapper[4660]: I1129 07:32:34.596759 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:32:34 crc kubenswrapper[4660]: I1129 07:32:34.959570 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:32:38 crc kubenswrapper[4660]: I1129 07:32:38.959211 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" Nov 29 07:32:43 crc kubenswrapper[4660]: I1129 07:32:43.108658 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:32:43 crc kubenswrapper[4660]: I1129 07:32:43.148725 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:32:43 crc kubenswrapper[4660]: I1129 07:32:43.290410 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6hcx4" Nov 29 07:32:44 crc kubenswrapper[4660]: I1129 07:32:44.013350 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7r6kk" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="registry-server" containerID="cri-o://29d0e4fbb0e7ff29e72ba19e0d660fae61b2b6e93c9c0dae42bb943ee3604645" gracePeriod=2 Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.028203 4660 generic.go:334] "Generic (PLEG): container finished" podID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerID="29d0e4fbb0e7ff29e72ba19e0d660fae61b2b6e93c9c0dae42bb943ee3604645" exitCode=0 Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.028279 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" 
event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerDied","Data":"29d0e4fbb0e7ff29e72ba19e0d660fae61b2b6e93c9c0dae42bb943ee3604645"} Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.482798 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.642203 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content\") pod \"c75da9a6-0430-44f0-9aa4-24d2fb355893\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.642468 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities\") pod \"c75da9a6-0430-44f0-9aa4-24d2fb355893\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.642560 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfhb\" (UniqueName: \"kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb\") pod \"c75da9a6-0430-44f0-9aa4-24d2fb355893\" (UID: \"c75da9a6-0430-44f0-9aa4-24d2fb355893\") " Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.644507 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities" (OuterVolumeSpecName: "utilities") pod "c75da9a6-0430-44f0-9aa4-24d2fb355893" (UID: "c75da9a6-0430-44f0-9aa4-24d2fb355893"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.656792 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb" (OuterVolumeSpecName: "kube-api-access-2gfhb") pod "c75da9a6-0430-44f0-9aa4-24d2fb355893" (UID: "c75da9a6-0430-44f0-9aa4-24d2fb355893"). InnerVolumeSpecName "kube-api-access-2gfhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.684983 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c75da9a6-0430-44f0-9aa4-24d2fb355893" (UID: "c75da9a6-0430-44f0-9aa4-24d2fb355893"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.745027 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.745061 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c75da9a6-0430-44f0-9aa4-24d2fb355893-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:46 crc kubenswrapper[4660]: I1129 07:32:46.745071 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gfhb\" (UniqueName: \"kubernetes.io/projected/c75da9a6-0430-44f0-9aa4-24d2fb355893-kube-api-access-2gfhb\") on node \"crc\" DevicePath \"\"" Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.037929 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r6kk" event={"ID":"c75da9a6-0430-44f0-9aa4-24d2fb355893","Type":"ContainerDied","Data":"025d39b2edac0e25ce6a3274508269e747ef8600686797ed0fb41e3d1ead1d74"} Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.038036 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r6kk" Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.038306 4660 scope.go:117] "RemoveContainer" containerID="29d0e4fbb0e7ff29e72ba19e0d660fae61b2b6e93c9c0dae42bb943ee3604645" Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.063308 4660 scope.go:117] "RemoveContainer" containerID="655289139d9ef3cd38697f9bd30b4020c9c4171e07a53123c44dcecaa600f8e3" Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.076163 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.094297 4660 scope.go:117] "RemoveContainer" containerID="12ca23909338a39434a35fccf15abaea5e0778b955063e55d2f886668f421cc8" Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.099861 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7r6kk"] Nov 29 07:32:47 crc kubenswrapper[4660]: I1129 07:32:47.708377 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" path="/var/lib/kubelet/pods/c75da9a6-0430-44f0-9aa4-24d2fb355893/volumes" Nov 29 07:32:49 crc kubenswrapper[4660]: I1129 07:32:49.567378 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-szl5x" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.557463 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl"] Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558243 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="extract-utilities" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558259 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="extract-utilities" Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558272 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 
07:32:53.558280 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558289 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558298 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558313 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="extract-utilities" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558321 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="extract-utilities" Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558334 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="extract-content" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558342 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="extract-content" Nov 29 07:32:53 crc kubenswrapper[4660]: E1129 07:32:53.558353 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="extract-content" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558360 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="extract-content" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558488 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75da9a6-0430-44f0-9aa4-24d2fb355893" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.558510 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ebb470-f873-48aa-b72d-582bb1e393b9" containerName="registry-server" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.559501 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.561432 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-twq7r" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.570699 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl"] Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.638496 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbkh\" (UniqueName: \"kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.638561 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.638682 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.739400 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbkh\" (UniqueName: \"kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.739457 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.739502 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.739942 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.740082 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.761811 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbkh\" (UniqueName: \"kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh\") pod \"dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:53 crc kubenswrapper[4660]: I1129 07:32:53.874037 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:32:54 crc kubenswrapper[4660]: I1129 07:32:54.094913 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl"] Nov 29 07:32:55 crc kubenswrapper[4660]: I1129 07:32:55.086920 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerStarted","Data":"d9f873bce4f69354fe735ae9ec1f1b69b0178cd1437c45935371b51493b807c3"} Nov 29 07:32:56 crc kubenswrapper[4660]: I1129 07:32:56.096562 4660 generic.go:334] "Generic (PLEG): container finished" podID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerID="1a06feb411dff2e7d5794ee590fa01f1aaf54d1b00271fcb8c37dfa516229ecf" exitCode=0 Nov 29 07:32:56 crc kubenswrapper[4660]: I1129 07:32:56.096662 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerDied","Data":"1a06feb411dff2e7d5794ee590fa01f1aaf54d1b00271fcb8c37dfa516229ecf"} Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.113706 4660 generic.go:334] "Generic (PLEG): container finished" podID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerID="209221c5db0e8a4150c55137ba71f6d05b89937e1604a68459359ab3981eb272" exitCode=0 Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.115278 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerDied","Data":"209221c5db0e8a4150c55137ba71f6d05b89937e1604a68459359ab3981eb272"} Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.134124 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.135459 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.160147 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.199270 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.199339 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.199412 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxf2f\" (UniqueName: \"kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.300444 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxf2f\" (UniqueName: \"kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.300522 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.300557 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.301006 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.301265 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.321034 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxf2f\" (UniqueName: \"kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f\") pod \"community-operators-lhnfc\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.450060 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:32:58 crc kubenswrapper[4660]: I1129 07:32:58.754646 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:32:58 crc kubenswrapper[4660]: W1129 07:32:58.758009 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b84b84c_f9be_4916_88ea_08785f632ba3.slice/crio-f983f0c704b433c8a0c728765c4408bef721083c6524703eb1e1c945b9413c0f WatchSource:0}: Error finding container f983f0c704b433c8a0c728765c4408bef721083c6524703eb1e1c945b9413c0f: Status 404 returned error can't find the container with id f983f0c704b433c8a0c728765c4408bef721083c6524703eb1e1c945b9413c0f Nov 29 07:32:59 crc kubenswrapper[4660]: I1129 07:32:59.121589 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerStarted","Data":"f983f0c704b433c8a0c728765c4408bef721083c6524703eb1e1c945b9413c0f"} Nov 29 07:32:59 crc kubenswrapper[4660]: I1129 07:32:59.124026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerStarted","Data":"c68fd6e1f2b269ac5fd1e789e9aa9d71212f29c2a1700d51700c2607ff898740"} Nov 29 07:33:00 crc kubenswrapper[4660]: I1129 07:33:00.135383 4660 generic.go:334] "Generic (PLEG): container finished" podID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerID="322fbf1fd723a395a46dabfed8b891cf1e6031020655f250c63629fbc48c92ab" exitCode=0 Nov 29 07:33:00 crc kubenswrapper[4660]: I1129 07:33:00.135449 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerDied","Data":"322fbf1fd723a395a46dabfed8b891cf1e6031020655f250c63629fbc48c92ab"} Nov 29 07:33:00 crc kubenswrapper[4660]: I1129 07:33:00.142045 4660 generic.go:334] "Generic (PLEG): container finished" podID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerID="c68fd6e1f2b269ac5fd1e789e9aa9d71212f29c2a1700d51700c2607ff898740" exitCode=0 Nov 29 07:33:00 crc kubenswrapper[4660]: I1129 07:33:00.142104 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerDied","Data":"c68fd6e1f2b269ac5fd1e789e9aa9d71212f29c2a1700d51700c2607ff898740"} Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.440492 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.548358 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle\") pod \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.548485 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbkh\" (UniqueName: \"kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh\") pod \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.548516 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util\") pod \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\" (UID: \"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27\") " Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.549714 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle" (OuterVolumeSpecName: "bundle") pod "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" (UID: "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.555241 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh" (OuterVolumeSpecName: "kube-api-access-cfbkh") pod "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" (UID: "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27"). InnerVolumeSpecName "kube-api-access-cfbkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.563830 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util" (OuterVolumeSpecName: "util") pod "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" (UID: "1dafe4cc-65b9-45d7-9e59-4d26b6bbea27"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.650564 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.650631 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbkh\" (UniqueName: \"kubernetes.io/projected/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-kube-api-access-cfbkh\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:01 crc kubenswrapper[4660]: I1129 07:33:01.650654 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dafe4cc-65b9-45d7-9e59-4d26b6bbea27-util\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:02 crc kubenswrapper[4660]: I1129 07:33:02.159826 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" event={"ID":"1dafe4cc-65b9-45d7-9e59-4d26b6bbea27","Type":"ContainerDied","Data":"d9f873bce4f69354fe735ae9ec1f1b69b0178cd1437c45935371b51493b807c3"} Nov 29 07:33:02 crc kubenswrapper[4660]: I1129 07:33:02.159867 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl" Nov 29 07:33:02 crc kubenswrapper[4660]: I1129 07:33:02.159886 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f873bce4f69354fe735ae9ec1f1b69b0178cd1437c45935371b51493b807c3" Nov 29 07:33:02 crc kubenswrapper[4660]: I1129 07:33:02.162170 4660 generic.go:334] "Generic (PLEG): container finished" podID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerID="9e8bd1e263e29b5548267da7cf916740de529d0336343d7b2c52216b4b46380a" exitCode=0 Nov 29 07:33:02 crc kubenswrapper[4660]: I1129 07:33:02.162217 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerDied","Data":"9e8bd1e263e29b5548267da7cf916740de529d0336343d7b2c52216b4b46380a"} Nov 29 07:33:03 crc kubenswrapper[4660]: I1129 07:33:03.170749 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerStarted","Data":"81bd498d4d3e8af4d340f3fe81f7f5f7ca6ce4f4ee1a953dc78d343320719499"} Nov 29 07:33:03 crc kubenswrapper[4660]: I1129 07:33:03.195118 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhnfc" podStartSLOduration=2.707285514 podStartE2EDuration="5.195094381s" podCreationTimestamp="2025-11-29 07:32:58 +0000 UTC" firstStartedPulling="2025-11-29 07:33:00.138410551 +0000 UTC m=+1070.691940450" lastFinishedPulling="2025-11-29 07:33:02.626219418 +0000 UTC m=+1073.179749317" observedRunningTime="2025-11-29 07:33:03.188665963 +0000 UTC m=+1073.742195862" watchObservedRunningTime="2025-11-29 07:33:03.195094381 +0000 UTC m=+1073.748624310" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.672638 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2"] Nov 29 07:33:04 crc kubenswrapper[4660]: E1129 07:33:04.673037 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" 
containerName="util" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.673048 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="util" Nov 29 07:33:04 crc kubenswrapper[4660]: E1129 07:33:04.673061 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="extract" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.673066 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="extract" Nov 29 07:33:04 crc kubenswrapper[4660]: E1129 07:33:04.673082 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="pull" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.673089 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="pull" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.673183 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dafe4cc-65b9-45d7-9e59-4d26b6bbea27" containerName="extract" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.673580 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.694389 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-88vqb" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.706360 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2"] Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.800038 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzdq\" (UniqueName: \"kubernetes.io/projected/b9aba585-e5b4-47a1-904b-f3f1f86d6251-kube-api-access-jbzdq\") pod \"openstack-operator-controller-operator-f9fd8cd-p4sd2\" (UID: \"b9aba585-e5b4-47a1-904b-f3f1f86d6251\") " pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.901504 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzdq\" (UniqueName: \"kubernetes.io/projected/b9aba585-e5b4-47a1-904b-f3f1f86d6251-kube-api-access-jbzdq\") pod \"openstack-operator-controller-operator-f9fd8cd-p4sd2\" (UID: \"b9aba585-e5b4-47a1-904b-f3f1f86d6251\") " pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.920565 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzdq\" (UniqueName: \"kubernetes.io/projected/b9aba585-e5b4-47a1-904b-f3f1f86d6251-kube-api-access-jbzdq\") pod \"openstack-operator-controller-operator-f9fd8cd-p4sd2\" (UID: \"b9aba585-e5b4-47a1-904b-f3f1f86d6251\") " pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:04 crc kubenswrapper[4660]: I1129 07:33:04.990247 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:05 crc kubenswrapper[4660]: W1129 07:33:05.278465 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9aba585_e5b4_47a1_904b_f3f1f86d6251.slice/crio-f55ed4fd3bf3036ba27857b64ed1080aa448b4901cc22602180b49de6c79926c WatchSource:0}: Error finding container f55ed4fd3bf3036ba27857b64ed1080aa448b4901cc22602180b49de6c79926c: Status 404 returned error can't find the container with id f55ed4fd3bf3036ba27857b64ed1080aa448b4901cc22602180b49de6c79926c Nov 29 07:33:05 crc kubenswrapper[4660]: I1129 07:33:05.284031 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2"] Nov 29 07:33:06 crc kubenswrapper[4660]: I1129 07:33:06.207574 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" event={"ID":"b9aba585-e5b4-47a1-904b-f3f1f86d6251","Type":"ContainerStarted","Data":"f55ed4fd3bf3036ba27857b64ed1080aa448b4901cc22602180b49de6c79926c"} Nov 29 07:33:08 crc kubenswrapper[4660]: I1129 07:33:08.450960 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:08 crc kubenswrapper[4660]: I1129 07:33:08.451252 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:08 crc kubenswrapper[4660]: I1129 07:33:08.490539 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:09 crc kubenswrapper[4660]: I1129 07:33:09.301368 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:10 crc kubenswrapper[4660]: I1129 07:33:10.914122 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:33:11 crc kubenswrapper[4660]: I1129 07:33:11.246234 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lhnfc" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="registry-server" containerID="cri-o://81bd498d4d3e8af4d340f3fe81f7f5f7ca6ce4f4ee1a953dc78d343320719499" gracePeriod=2 Nov 29 07:33:12 crc kubenswrapper[4660]: I1129 07:33:12.253629 4660 generic.go:334] "Generic (PLEG): container finished" podID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerID="81bd498d4d3e8af4d340f3fe81f7f5f7ca6ce4f4ee1a953dc78d343320719499" exitCode=0 Nov 29 07:33:12 crc kubenswrapper[4660]: I1129 07:33:12.253949 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerDied","Data":"81bd498d4d3e8af4d340f3fe81f7f5f7ca6ce4f4ee1a953dc78d343320719499"} Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.099464 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.223487 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities\") pod \"4b84b84c-f9be-4916-88ea-08785f632ba3\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.223591 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content\") pod \"4b84b84c-f9be-4916-88ea-08785f632ba3\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.223707 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxf2f\" (UniqueName: \"kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f\") pod \"4b84b84c-f9be-4916-88ea-08785f632ba3\" (UID: \"4b84b84c-f9be-4916-88ea-08785f632ba3\") " Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.224675 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities" (OuterVolumeSpecName: "utilities") pod "4b84b84c-f9be-4916-88ea-08785f632ba3" (UID: "4b84b84c-f9be-4916-88ea-08785f632ba3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.239921 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f" (OuterVolumeSpecName: "kube-api-access-nxf2f") pod "4b84b84c-f9be-4916-88ea-08785f632ba3" (UID: "4b84b84c-f9be-4916-88ea-08785f632ba3"). InnerVolumeSpecName "kube-api-access-nxf2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.262512 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhnfc" event={"ID":"4b84b84c-f9be-4916-88ea-08785f632ba3","Type":"ContainerDied","Data":"f983f0c704b433c8a0c728765c4408bef721083c6524703eb1e1c945b9413c0f"} Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.262572 4660 scope.go:117] "RemoveContainer" containerID="81bd498d4d3e8af4d340f3fe81f7f5f7ca6ce4f4ee1a953dc78d343320719499" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.262588 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhnfc" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.284918 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b84b84c-f9be-4916-88ea-08785f632ba3" (UID: "4b84b84c-f9be-4916-88ea-08785f632ba3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.325891 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.325949 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b84b84c-f9be-4916-88ea-08785f632ba3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.325966 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxf2f\" (UniqueName: \"kubernetes.io/projected/4b84b84c-f9be-4916-88ea-08785f632ba3-kube-api-access-nxf2f\") on node \"crc\" DevicePath \"\"" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.360192 4660 scope.go:117] "RemoveContainer" containerID="9e8bd1e263e29b5548267da7cf916740de529d0336343d7b2c52216b4b46380a" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.417936 4660 scope.go:117] "RemoveContainer" containerID="322fbf1fd723a395a46dabfed8b891cf1e6031020655f250c63629fbc48c92ab" Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.597076 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.600884 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lhnfc"] Nov 29 07:33:13 crc kubenswrapper[4660]: I1129 07:33:13.700347 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" path="/var/lib/kubelet/pods/4b84b84c-f9be-4916-88ea-08785f632ba3/volumes" Nov 29 07:33:14 crc kubenswrapper[4660]: I1129 07:33:14.278963 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" event={"ID":"b9aba585-e5b4-47a1-904b-f3f1f86d6251","Type":"ContainerStarted","Data":"7825455882c318ce7a1a3d1552c5445f59c387fe96bb8071056f0ce1b3004845"} Nov 29 07:33:14 crc kubenswrapper[4660]: I1129 07:33:14.279912 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:14 crc kubenswrapper[4660]: I1129 07:33:14.309006 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" podStartSLOduration=2.172954208 podStartE2EDuration="10.308991422s" podCreationTimestamp="2025-11-29 07:33:04 +0000 UTC" firstStartedPulling="2025-11-29 07:33:05.281563492 +0000 UTC m=+1075.835093391" lastFinishedPulling="2025-11-29 07:33:13.417600706 +0000 UTC m=+1083.971130605" observedRunningTime="2025-11-29 07:33:14.305402974 +0000 UTC m=+1084.858932873" watchObservedRunningTime="2025-11-29 07:33:14.308991422 +0000 UTC m=+1084.862521321" Nov 29 07:33:24 crc kubenswrapper[4660]: I1129 07:33:24.992712 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-f9fd8cd-p4sd2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.084809 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr"] Nov 29 07:33:49 crc kubenswrapper[4660]: E1129 07:33:49.085491 4660 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="extract-utilities" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.085502 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="extract-utilities" Nov 29 07:33:49 crc kubenswrapper[4660]: E1129 07:33:49.085517 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="extract-content" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.085523 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="extract-content" Nov 29 07:33:49 crc kubenswrapper[4660]: E1129 07:33:49.085536 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="registry-server" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.085544 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="registry-server" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.085680 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b84b84c-f9be-4916-88ea-08785f632ba3" containerName="registry-server" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.086246 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.096971 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-spt2v" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.103953 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.118043 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.118905 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.121156 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gk9w2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.145853 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.147102 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.151093 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fbgd7" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.151943 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.182705 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.195993 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbn5x\" (UniqueName: \"kubernetes.io/projected/81afdf1a-a8f8-4f69-8824-192bcf14424c-kube-api-access-qbn5x\") pod \"designate-operator-controller-manager-78b4bc895b-jdqzs\" (UID: \"81afdf1a-a8f8-4f69-8824-192bcf14424c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.196341 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6l9c\" (UniqueName: \"kubernetes.io/projected/f0b999b3-e302-40ca-a1aa-5173b5655498-kube-api-access-z6l9c\") pod \"barbican-operator-controller-manager-59d587b55-wqktr\" (UID: \"f0b999b3-e302-40ca-a1aa-5173b5655498\") " pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.196497 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jm85\" (UniqueName: \"kubernetes.io/projected/0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd-kube-api-access-6jm85\") pod \"cinder-operator-controller-manager-859b6ccc6-cmgp5\" (UID: \"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.214046 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.215422 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.217077 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6768t" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.231689 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.248736 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.249721 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.253017 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j2k59" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.272920 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.274531 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.280507 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.283900 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jggk8" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.298268 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdjg\" (UniqueName: \"kubernetes.io/projected/7ce83127-45e9-4a96-8815-538f3bde77ed-kube-api-access-npdjg\") pod \"glance-operator-controller-manager-668d9c48b9-4gjhw\" (UID: \"7ce83127-45e9-4a96-8815-538f3bde77ed\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.298495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbn5x\" (UniqueName: \"kubernetes.io/projected/81afdf1a-a8f8-4f69-8824-192bcf14424c-kube-api-access-qbn5x\") pod \"designate-operator-controller-manager-78b4bc895b-jdqzs\" (UID: \"81afdf1a-a8f8-4f69-8824-192bcf14424c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.298636 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6l9c\" (UniqueName: \"kubernetes.io/projected/f0b999b3-e302-40ca-a1aa-5173b5655498-kube-api-access-z6l9c\") pod \"barbican-operator-controller-manager-59d587b55-wqktr\" (UID: \"f0b999b3-e302-40ca-a1aa-5173b5655498\") " pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.298758 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k8bm\" (UniqueName: \"kubernetes.io/projected/29c0443d-0d08-4708-b268-07ae28680e01-kube-api-access-8k8bm\") pod \"heat-operator-controller-manager-5f64f6f8bb-v9rs2\" (UID: \"29c0443d-0d08-4708-b268-07ae28680e01\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.298855 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jm85\" (UniqueName: \"kubernetes.io/projected/0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd-kube-api-access-6jm85\") pod \"cinder-operator-controller-manager-859b6ccc6-cmgp5\" (UID: \"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.304823 4660 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.305856 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.310721 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.311060 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wvsw8" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.311303 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.326842 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.329114 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6l9c\" (UniqueName: \"kubernetes.io/projected/f0b999b3-e302-40ca-a1aa-5173b5655498-kube-api-access-z6l9c\") pod \"barbican-operator-controller-manager-59d587b55-wqktr\" (UID: \"f0b999b3-e302-40ca-a1aa-5173b5655498\") " pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.338925 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.339021 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbn5x\" (UniqueName: \"kubernetes.io/projected/81afdf1a-a8f8-4f69-8824-192bcf14424c-kube-api-access-qbn5x\") pod \"designate-operator-controller-manager-78b4bc895b-jdqzs\" (UID: \"81afdf1a-a8f8-4f69-8824-192bcf14424c\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.339889 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.343523 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xfhfs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.346474 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jm85\" (UniqueName: \"kubernetes.io/projected/0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd-kube-api-access-6jm85\") pod \"cinder-operator-controller-manager-859b6ccc6-cmgp5\" (UID: \"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.355780 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.367688 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.369018 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.371377 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pkfk9" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.372912 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.374304 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.391934 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-779js" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.399683 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400310 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k8bm\" (UniqueName: \"kubernetes.io/projected/29c0443d-0d08-4708-b268-07ae28680e01-kube-api-access-8k8bm\") pod \"heat-operator-controller-manager-5f64f6f8bb-v9rs2\" (UID: \"29c0443d-0d08-4708-b268-07ae28680e01\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400357 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400407 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxp8j\" (UniqueName: \"kubernetes.io/projected/96a424c4-d4f3-49c2-94a3-20d236cb207d-kube-api-access-vxp8j\") pod \"keystone-operator-controller-manager-546d4bdf48-b2rlk\" (UID: \"96a424c4-d4f3-49c2-94a3-20d236cb207d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400445 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6bjl\" (UniqueName: \"kubernetes.io/projected/edf52fa0-02fe-49d3-8368-fe26598027ec-kube-api-access-n6bjl\") pod \"ironic-operator-controller-manager-6c548fd776-2mb85\" (UID: \"edf52fa0-02fe-49d3-8368-fe26598027ec\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400482 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdjg\" (UniqueName: \"kubernetes.io/projected/7ce83127-45e9-4a96-8815-538f3bde77ed-kube-api-access-npdjg\") pod \"glance-operator-controller-manager-668d9c48b9-4gjhw\" (UID: \"7ce83127-45e9-4a96-8815-538f3bde77ed\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400516 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgd7b\" (UniqueName: \"kubernetes.io/projected/a6e93136-e20e-4070-ae0d-db82c3d2b464-kube-api-access-xgd7b\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.400542 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvd8\" (UniqueName: 
\"kubernetes.io/projected/d2a4ddee-42a4-451d-9bd7-3028e4680d47-kube-api-access-kqvd8\") pod \"horizon-operator-controller-manager-68c6d99b8f-cwb2d\" (UID: \"d2a4ddee-42a4-451d-9bd7-3028e4680d47\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.407029 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.407360 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.444739 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.450684 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdjg\" (UniqueName: \"kubernetes.io/projected/7ce83127-45e9-4a96-8815-538f3bde77ed-kube-api-access-npdjg\") pod \"glance-operator-controller-manager-668d9c48b9-4gjhw\" (UID: \"7ce83127-45e9-4a96-8815-538f3bde77ed\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.480453 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k8bm\" (UniqueName: \"kubernetes.io/projected/29c0443d-0d08-4708-b268-07ae28680e01-kube-api-access-8k8bm\") pod \"heat-operator-controller-manager-5f64f6f8bb-v9rs2\" (UID: \"29c0443d-0d08-4708-b268-07ae28680e01\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.487118 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.496113 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.497204 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.501762 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6bjl\" (UniqueName: \"kubernetes.io/projected/edf52fa0-02fe-49d3-8368-fe26598027ec-kube-api-access-n6bjl\") pod \"ironic-operator-controller-manager-6c548fd776-2mb85\" (UID: \"edf52fa0-02fe-49d3-8368-fe26598027ec\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.502013 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgd7b\" (UniqueName: \"kubernetes.io/projected/a6e93136-e20e-4070-ae0d-db82c3d2b464-kube-api-access-xgd7b\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.502695 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvd8\" (UniqueName: \"kubernetes.io/projected/d2a4ddee-42a4-451d-9bd7-3028e4680d47-kube-api-access-kqvd8\") pod \"horizon-operator-controller-manager-68c6d99b8f-cwb2d\" (UID: \"d2a4ddee-42a4-451d-9bd7-3028e4680d47\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.502844 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.502947 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxp8j\" (UniqueName: \"kubernetes.io/projected/96a424c4-d4f3-49c2-94a3-20d236cb207d-kube-api-access-vxp8j\") pod \"keystone-operator-controller-manager-546d4bdf48-b2rlk\" (UID: \"96a424c4-d4f3-49c2-94a3-20d236cb207d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.503053 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs8v\" (UniqueName: \"kubernetes.io/projected/08635026-10f5-4929-b9f5-b5d6fcac6d28-kube-api-access-jhs8v\") pod \"manila-operator-controller-manager-6546668bfd-v9g26\" (UID: \"08635026-10f5-4929-b9f5-b5d6fcac6d28\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.501877 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dl7v4" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.503604 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr"] Nov 29 07:33:49 crc kubenswrapper[4660]: E1129 07:33:49.503778 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:49 crc kubenswrapper[4660]: E1129 07:33:49.503895 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:50.003878145 +0000 UTC m=+1120.557408044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.510083 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.532122 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-54jph" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.541243 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgd7b\" (UniqueName: \"kubernetes.io/projected/a6e93136-e20e-4070-ae0d-db82c3d2b464-kube-api-access-xgd7b\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.544285 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.553144 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.572877 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxp8j\" (UniqueName: \"kubernetes.io/projected/96a424c4-d4f3-49c2-94a3-20d236cb207d-kube-api-access-vxp8j\") pod \"keystone-operator-controller-manager-546d4bdf48-b2rlk\" (UID: \"96a424c4-d4f3-49c2-94a3-20d236cb207d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.580807 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.581789 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6bjl\" (UniqueName: \"kubernetes.io/projected/edf52fa0-02fe-49d3-8368-fe26598027ec-kube-api-access-n6bjl\") pod \"ironic-operator-controller-manager-6c548fd776-2mb85\" (UID: \"edf52fa0-02fe-49d3-8368-fe26598027ec\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.585089 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvd8\" (UniqueName: \"kubernetes.io/projected/d2a4ddee-42a4-451d-9bd7-3028e4680d47-kube-api-access-kqvd8\") pod \"horizon-operator-controller-manager-68c6d99b8f-cwb2d\" (UID: \"d2a4ddee-42a4-451d-9bd7-3028e4680d47\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.596850 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.600864 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.603967 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqh9n\" (UniqueName: \"kubernetes.io/projected/c0579e8a-66e1-4b7c-aaf8-435d07e6e98d-kube-api-access-xqh9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7446l\" (UID: \"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.604015 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjv5\" (UniqueName: \"kubernetes.io/projected/b191bd3e-cd1b-43c8-99c4-54701a29dfda-kube-api-access-fcjv5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8cnzr\" (UID: \"b191bd3e-cd1b-43c8-99c4-54701a29dfda\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.604112 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhs8v\" (UniqueName: \"kubernetes.io/projected/08635026-10f5-4929-b9f5-b5d6fcac6d28-kube-api-access-jhs8v\") pod \"manila-operator-controller-manager-6546668bfd-v9g26\" (UID: \"08635026-10f5-4929-b9f5-b5d6fcac6d28\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.611759 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.612759 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.619120 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-52pl8" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.642231 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhs8v\" (UniqueName: \"kubernetes.io/projected/08635026-10f5-4929-b9f5-b5d6fcac6d28-kube-api-access-jhs8v\") pod \"manila-operator-controller-manager-6546668bfd-v9g26\" (UID: \"08635026-10f5-4929-b9f5-b5d6fcac6d28\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.643814 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.665550 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.666903 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.680284 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w6lzp" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.691268 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.735679 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.785204 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkprg\" (UniqueName: \"kubernetes.io/projected/2badc2b5-6bdb-44b6-8d54-f8763fe78fd6-kube-api-access-tkprg\") pod \"octavia-operator-controller-manager-998648c74-6c2m6\" (UID: \"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.785267 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8ph\" (UniqueName: \"kubernetes.io/projected/1688cfe7-0002-4b5c-916b-ca18c9519de3-kube-api-access-kj8ph\") pod \"nova-operator-controller-manager-697bc559fc-t82nj\" (UID: \"1688cfe7-0002-4b5c-916b-ca18c9519de3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.785390 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqh9n\" (UniqueName: \"kubernetes.io/projected/c0579e8a-66e1-4b7c-aaf8-435d07e6e98d-kube-api-access-xqh9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7446l\" (UID: \"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.785458 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjv5\" (UniqueName: \"kubernetes.io/projected/b191bd3e-cd1b-43c8-99c4-54701a29dfda-kube-api-access-fcjv5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8cnzr\" (UID: \"b191bd3e-cd1b-43c8-99c4-54701a29dfda\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.793499 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.850325 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.889007 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.890009 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.890081 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.901399 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cjnkl" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.901728 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.906456 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkprg\" (UniqueName: \"kubernetes.io/projected/2badc2b5-6bdb-44b6-8d54-f8763fe78fd6-kube-api-access-tkprg\") pod \"octavia-operator-controller-manager-998648c74-6c2m6\" (UID: \"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.906489 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8ph\" (UniqueName: \"kubernetes.io/projected/1688cfe7-0002-4b5c-916b-ca18c9519de3-kube-api-access-kj8ph\") pod \"nova-operator-controller-manager-697bc559fc-t82nj\" (UID: \"1688cfe7-0002-4b5c-916b-ca18c9519de3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.920709 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjv5\" (UniqueName: \"kubernetes.io/projected/b191bd3e-cd1b-43c8-99c4-54701a29dfda-kube-api-access-fcjv5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-8cnzr\" (UID: \"b191bd3e-cd1b-43c8-99c4-54701a29dfda\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.950283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkprg\" (UniqueName: \"kubernetes.io/projected/2badc2b5-6bdb-44b6-8d54-f8763fe78fd6-kube-api-access-tkprg\") pod \"octavia-operator-controller-manager-998648c74-6c2m6\" (UID: \"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.950791 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8ph\" (UniqueName: \"kubernetes.io/projected/1688cfe7-0002-4b5c-916b-ca18c9519de3-kube-api-access-kj8ph\") pod \"nova-operator-controller-manager-697bc559fc-t82nj\" (UID: \"1688cfe7-0002-4b5c-916b-ca18c9519de3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.954801 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqh9n\" (UniqueName: \"kubernetes.io/projected/c0579e8a-66e1-4b7c-aaf8-435d07e6e98d-kube-api-access-xqh9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7446l\" (UID: \"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.966966 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.981902 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s"] Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.982971 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:33:49 crc kubenswrapper[4660]: I1129 07:33:49.987136 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l5vx2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.000079 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.017822 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.017915 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.017951 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfv69\" (UniqueName: \"kubernetes.io/projected/02680922-54f1-494d-a32d-e01b82b9cfd2-kube-api-access-sfv69\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.018101 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.018146 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:51.018131479 +0000 UTC m=+1121.571661378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.052695 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-95ndx"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.054665 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.061254 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zj4tx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.061640 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.082587 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.086329 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.096298 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c4xdq" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.124485 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.124536 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfv69\" (UniqueName: \"kubernetes.io/projected/02680922-54f1-494d-a32d-e01b82b9cfd2-kube-api-access-sfv69\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.124608 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgwh\" (UniqueName: \"kubernetes.io/projected/eb02d6d1-14c5-409f-8c54-60e35f909a84-kube-api-access-5mgwh\") pod \"ovn-operator-controller-manager-b6456fdb6-z5n6s\" (UID: \"eb02d6d1-14c5-409f-8c54-60e35f909a84\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.124819 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.124908 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert podName:02680922-54f1-494d-a32d-e01b82b9cfd2 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:50.624886687 +0000 UTC m=+1121.178416656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" (UID: "02680922-54f1-494d-a32d-e01b82b9cfd2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.124940 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-95ndx"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.156980 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.160521 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfv69\" (UniqueName: \"kubernetes.io/projected/02680922-54f1-494d-a32d-e01b82b9cfd2-kube-api-access-sfv69\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.185910 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.194913 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.196162 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.202703 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8l9hj" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.211569 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mw22w"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.224454 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.225302 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466mt\" (UniqueName: \"kubernetes.io/projected/d56ee9fc-8151-4442-b491-1e5c8faf48c4-kube-api-access-466mt\") pod \"placement-operator-controller-manager-78f8948974-95ndx\" (UID: \"d56ee9fc-8151-4442-b491-1e5c8faf48c4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.225408 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mgwh\" (UniqueName: \"kubernetes.io/projected/eb02d6d1-14c5-409f-8c54-60e35f909a84-kube-api-access-5mgwh\") pod \"ovn-operator-controller-manager-b6456fdb6-z5n6s\" (UID: \"eb02d6d1-14c5-409f-8c54-60e35f909a84\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.225445 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96fl\" (UniqueName: \"kubernetes.io/projected/e512b840-83f6-47dc-b5ed-669807cc2878-kube-api-access-f96fl\") pod \"swift-operator-controller-manager-5f8c65bbfc-724c7\" (UID: \"e512b840-83f6-47dc-b5ed-669807cc2878\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.246727 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2rl8f" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.260534 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.280162 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mgwh\" (UniqueName: \"kubernetes.io/projected/eb02d6d1-14c5-409f-8c54-60e35f909a84-kube-api-access-5mgwh\") pod \"ovn-operator-controller-manager-b6456fdb6-z5n6s\" (UID: \"eb02d6d1-14c5-409f-8c54-60e35f909a84\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.297494 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.328917 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466mt\" (UniqueName: \"kubernetes.io/projected/d56ee9fc-8151-4442-b491-1e5c8faf48c4-kube-api-access-466mt\") pod \"placement-operator-controller-manager-78f8948974-95ndx\" (UID: \"d56ee9fc-8151-4442-b491-1e5c8faf48c4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.328980 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfv9q\" (UniqueName: \"kubernetes.io/projected/e0c70c45-673e-47e6-80cd-99bbfbe6e695-kube-api-access-bfv9q\") pod \"test-operator-controller-manager-5854674fcc-mw22w\" (UID: \"e0c70c45-673e-47e6-80cd-99bbfbe6e695\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 
07:33:50.329039 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwft\" (UniqueName: \"kubernetes.io/projected/01080af3-022a-430c-a9cc-b9b98f5214de-kube-api-access-gxwft\") pod \"telemetry-operator-controller-manager-76cc84c6bb-4zn9g\" (UID: \"01080af3-022a-430c-a9cc-b9b98f5214de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.329076 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96fl\" (UniqueName: \"kubernetes.io/projected/e512b840-83f6-47dc-b5ed-669807cc2878-kube-api-access-f96fl\") pod \"swift-operator-controller-manager-5f8c65bbfc-724c7\" (UID: \"e512b840-83f6-47dc-b5ed-669807cc2878\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.330404 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mw22w"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.341600 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.371365 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96fl\" (UniqueName: \"kubernetes.io/projected/e512b840-83f6-47dc-b5ed-669807cc2878-kube-api-access-f96fl\") pod \"swift-operator-controller-manager-5f8c65bbfc-724c7\" (UID: \"e512b840-83f6-47dc-b5ed-669807cc2878\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.394851 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466mt\" (UniqueName: \"kubernetes.io/projected/d56ee9fc-8151-4442-b491-1e5c8faf48c4-kube-api-access-466mt\") pod \"placement-operator-controller-manager-78f8948974-95ndx\" (UID: \"d56ee9fc-8151-4442-b491-1e5c8faf48c4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.395767 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.396890 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.419302 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.419571 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.425947 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-h79wx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.430764 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwft\" (UniqueName: \"kubernetes.io/projected/01080af3-022a-430c-a9cc-b9b98f5214de-kube-api-access-gxwft\") pod \"telemetry-operator-controller-manager-76cc84c6bb-4zn9g\" (UID: \"01080af3-022a-430c-a9cc-b9b98f5214de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.430863 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfv9q\" (UniqueName: \"kubernetes.io/projected/e0c70c45-673e-47e6-80cd-99bbfbe6e695-kube-api-access-bfv9q\") pod \"test-operator-controller-manager-5854674fcc-mw22w\" (UID: \"e0c70c45-673e-47e6-80cd-99bbfbe6e695\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.454561 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.469231 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.480342 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfv9q\" (UniqueName: \"kubernetes.io/projected/e0c70c45-673e-47e6-80cd-99bbfbe6e695-kube-api-access-bfv9q\") pod \"test-operator-controller-manager-5854674fcc-mw22w\" (UID: \"e0c70c45-673e-47e6-80cd-99bbfbe6e695\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.491759 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.491931 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.492304 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jz8rv" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.494435 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwft\" (UniqueName: \"kubernetes.io/projected/01080af3-022a-430c-a9cc-b9b98f5214de-kube-api-access-gxwft\") pod \"telemetry-operator-controller-manager-76cc84c6bb-4zn9g\" (UID: \"01080af3-022a-430c-a9cc-b9b98f5214de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.502674 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.505943 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 
07:33:50.507246 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.521730 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5mm9m" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.533103 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8cvt\" (UniqueName: \"kubernetes.io/projected/4747fced-480f-4185-b4e3-2dedd7f05614-kube-api-access-k8cvt\") pod \"watcher-operator-controller-manager-769dc69bc-7hsm2\" (UID: \"4747fced-480f-4185-b4e3-2dedd7f05614\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.540948 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.559249 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.606219 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.682812 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnklw\" (UniqueName: \"kubernetes.io/projected/9ee27942-cb74-4ee0-b4b9-9f995b6604a4-kube-api-access-mnklw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8vp89\" (UID: \"9ee27942-cb74-4ee0-b4b9-9f995b6604a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.682872 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxx2\" (UniqueName: \"kubernetes.io/projected/e676373b-cd82-4455-ae35-62c31e458d5d-kube-api-access-xlxx2\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.682926 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8cvt\" (UniqueName: \"kubernetes.io/projected/4747fced-480f-4185-b4e3-2dedd7f05614-kube-api-access-k8cvt\") pod \"watcher-operator-controller-manager-769dc69bc-7hsm2\" (UID: \"4747fced-480f-4185-b4e3-2dedd7f05614\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.682947 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.683008 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.683165 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.684260 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.684314 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert podName:02680922-54f1-494d-a32d-e01b82b9cfd2 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:51.684296559 +0000 UTC m=+1122.237826448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" (UID: "02680922-54f1-494d-a32d-e01b82b9cfd2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.690636 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.722520 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8cvt\" (UniqueName: \"kubernetes.io/projected/4747fced-480f-4185-b4e3-2dedd7f05614-kube-api-access-k8cvt\") pod \"watcher-operator-controller-manager-769dc69bc-7hsm2\" (UID: \"4747fced-480f-4185-b4e3-2dedd7f05614\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.785694 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnklw\" (UniqueName: \"kubernetes.io/projected/9ee27942-cb74-4ee0-b4b9-9f995b6604a4-kube-api-access-mnklw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8vp89\" (UID: \"9ee27942-cb74-4ee0-b4b9-9f995b6604a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.785737 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxx2\" (UniqueName: \"kubernetes.io/projected/e676373b-cd82-4455-ae35-62c31e458d5d-kube-api-access-xlxx2\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.785795 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.785842 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.786795 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.786837 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:51.286822141 +0000 UTC m=+1121.840352040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found
Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.786941 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 29 07:33:50 crc kubenswrapper[4660]: E1129 07:33:50.786988 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:51.286972225 +0000 UTC m=+1121.840502124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found
Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.810306 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnklw\" (UniqueName: \"kubernetes.io/projected/9ee27942-cb74-4ee0-b4b9-9f995b6604a4-kube-api-access-mnklw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8vp89\" (UID: \"9ee27942-cb74-4ee0-b4b9-9f995b6604a4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89"
Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.824668 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxx2\" (UniqueName: \"kubernetes.io/projected/e676373b-cd82-4455-ae35-62c31e458d5d-kube-api-access-xlxx2\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm"
Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.841886 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2"
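
Note the retry arithmetic in the entries above and below: the same cert volumes fail with durationBeforeRetry 500ms, then 1s, then 2s, and finally 4s, i.e. kubelet doubles the delay between mount attempts each time. A minimal Python sketch of that doubling schedule (illustrative only, not kubelet's code; the cap is an assumed parameter):

    import itertools

    # Doubling retry delay, as a generator. The 500ms start and factor of 2
    # match the durationBeforeRetry values logged here; the 300s cap is an
    # assumed illustration parameter, not a value taken from kubelet.
    def retry_delays(initial=0.5, factor=2.0, cap=300.0):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(retry_delays(), 4)))  # [0.5, 1.0, 2.0, 4.0]
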
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.842311 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.871541 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs"] Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.880709 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.882431 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5"] Nov 29 07:33:50 crc kubenswrapper[4660]: W1129 07:33:50.893846 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7f5fdc_8dd7_40cb_88cd_3fd3830101dd.slice/crio-1f8d04bf05c6644a2ea640bf2541a87ece702543a81e023daa98b7b0394d11a2 WatchSource:0}: Error finding container 1f8d04bf05c6644a2ea640bf2541a87ece702543a81e023daa98b7b0394d11a2: Status 404 returned error can't find the container with id 1f8d04bf05c6644a2ea640bf2541a87ece702543a81e023daa98b7b0394d11a2 Nov 29 07:33:50 crc kubenswrapper[4660]: I1129 07:33:50.913829 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.090352 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.090523 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.090593 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:53.09056375 +0000 UTC m=+1123.644093649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.293473 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.293557 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.293743 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.293832 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:52.293806474 +0000 UTC m=+1122.847336403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.293752 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.293898 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:52.293886046 +0000 UTC m=+1122.847415985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: W1129 07:33:51.392960 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf52fa0_02fe_49d3_8368_fe26598027ec.slice/crio-93fe08bc865f7f65c7acb32eb57279c0ff3ee25c0bfbbeb36d844e8d8ca47e0a WatchSource:0}: Error finding container 93fe08bc865f7f65c7acb32eb57279c0ff3ee25c0bfbbeb36d844e8d8ca47e0a: Status 404 returned error can't find the container with id 93fe08bc865f7f65c7acb32eb57279c0ff3ee25c0bfbbeb36d844e8d8ca47e0a Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.402144 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.456723 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.463169 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.473531 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.481895 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.490415 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.500030 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.508056 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.512750 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.518757 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.523172 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l"] Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.526858 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj8ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-t82nj_openstack-operators(1688cfe7-0002-4b5c-916b-ca18c9519de3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.528376 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj8ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-t82nj_openstack-operators(1688cfe7-0002-4b5c-916b-ca18c9519de3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.529843 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" podUID="1688cfe7-0002-4b5c-916b-ca18c9519de3" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.605203 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mw22w"] Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.613731 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfv9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-mw22w_openstack-operators(e0c70c45-673e-47e6-80cd-99bbfbe6e695): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.623019 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g"] Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.623299 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfv9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-mw22w_openstack-operators(e0c70c45-673e-47e6-80cd-99bbfbe6e695): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.624905 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" podUID="e0c70c45-673e-47e6-80cd-99bbfbe6e695" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.624966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" 
event={"ID":"29c0443d-0d08-4708-b268-07ae28680e01","Type":"ContainerStarted","Data":"56489ee5153d87602efd6916e6a2017e073199071f3dd7e441b85c17d346ac43"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.625907 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" event={"ID":"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd","Type":"ContainerStarted","Data":"1f8d04bf05c6644a2ea640bf2541a87ece702543a81e023daa98b7b0394d11a2"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.627630 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" event={"ID":"d2a4ddee-42a4-451d-9bd7-3028e4680d47","Type":"ContainerStarted","Data":"9e616900ff512547986b7909e4832cfca54bb2fb8810dcf0546d88bfd40ae097"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.628559 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" event={"ID":"96a424c4-d4f3-49c2-94a3-20d236cb207d","Type":"ContainerStarted","Data":"237c0cd69149200d1d2c727cbf267889d34537f0e70dc8ee27cb2619af04cf89"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.629716 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" event={"ID":"eb02d6d1-14c5-409f-8c54-60e35f909a84","Type":"ContainerStarted","Data":"fbc5fbd4d4322279f57c1531d5e5bbbd0f34752d4f47f4003226266b7cda73dc"} Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.634345 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxwft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-4zn9g_openstack-operators(01080af3-022a-430c-a9cc-b9b98f5214de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.634518 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" event={"ID":"81afdf1a-a8f8-4f69-8824-192bcf14424c","Type":"ContainerStarted","Data":"9e308a3c276e1c3acfc5926050376fe178768e833ca345e589677625a8e7e587"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.636476 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" event={"ID":"08635026-10f5-4929-b9f5-b5d6fcac6d28","Type":"ContainerStarted","Data":"c324826004e6b85a409dbd4a9eb41d88744f886a6076dc6051577697cbd5463d"} Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.641489 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxwft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-4zn9g_openstack-operators(01080af3-022a-430c-a9cc-b9b98f5214de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.641914 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" event={"ID":"7ce83127-45e9-4a96-8815-538f3bde77ed","Type":"ContainerStarted","Data":"5cab8a0e2eb9f5a8d90d300ea00c277a58d64cfaf0f7a8519fcd6e83d2052e47"} Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.643065 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" podUID="01080af3-022a-430c-a9cc-b9b98f5214de" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.649159 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" event={"ID":"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6","Type":"ContainerStarted","Data":"03572c8f5a18304292dc8102155dee57e6d180a665f185fdcf3bce329227db3b"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.658026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" event={"ID":"b191bd3e-cd1b-43c8-99c4-54701a29dfda","Type":"ContainerStarted","Data":"606003f23219170c4e7a3fd7833c4dcbf57a302bff9a7314037130611e53caf6"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.659236 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" event={"ID":"f0b999b3-e302-40ca-a1aa-5173b5655498","Type":"ContainerStarted","Data":"dba4c4c6ee42a56cd727473305182fd8f538ad55e9c763ad52e90bbff6a5a6c9"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.661426 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-95ndx"] Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.661548 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" event={"ID":"edf52fa0-02fe-49d3-8368-fe26598027ec","Type":"ContainerStarted","Data":"93fe08bc865f7f65c7acb32eb57279c0ff3ee25c0bfbbeb36d844e8d8ca47e0a"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.662436 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" event={"ID":"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d","Type":"ContainerStarted","Data":"8dcab5d6ddb42ba03cd79db160916ecbca6022e6c63c746b9b6a3625dfe74399"} Nov 29 07:33:51 crc kubenswrapper[4660]: W1129 07:33:51.663823 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode512b840_83f6_47dc_b5ed_669807cc2878.slice/crio-7bd1134e6cd2e6894a3c65d451380a7eea0b7abbf5a259f624c94042c03e24d1 WatchSource:0}: Error finding container 7bd1134e6cd2e6894a3c65d451380a7eea0b7abbf5a259f624c94042c03e24d1: Status 404 returned error can't find the container with id 7bd1134e6cd2e6894a3c65d451380a7eea0b7abbf5a259f624c94042c03e24d1 Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.664578 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" event={"ID":"1688cfe7-0002-4b5c-916b-ca18c9519de3","Type":"ContainerStarted","Data":"33b0aa1c25cda754b5929c135d05e99aa7908d41e0f4a3dc39d8e48c8344da02"} Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 
Nov 29 07:33:51 crc kubenswrapper[4660]: W1129 07:33:51.668471 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56ee9fc_8151_4442_b491_1e5c8faf48c4.slice/crio-1f8046f3d3ed2e1e5c2c85996c2657eb41fecac5972565a3152f5d9f3c153a7c WatchSource:0}: Error finding container 1f8046f3d3ed2e1e5c2c85996c2657eb41fecac5972565a3152f5d9f3c153a7c: Status 404 returned error can't find the container with id 1f8046f3d3ed2e1e5c2c85996c2657eb41fecac5972565a3152f5d9f3c153a7c
Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.669027 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f96fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-724c7_openstack-operators(e512b840-83f6-47dc-b5ed-669807cc2878): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.670226 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" podUID="1688cfe7-0002-4b5c-916b-ca18c9519de3" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.671134 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f96fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-724c7_openstack-operators(e512b840-83f6-47dc-b5ed-669807cc2878): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.672223 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" podUID="e512b840-83f6-47dc-b5ed-669807cc2878" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.672848 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-466mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-95ndx_openstack-operators(d56ee9fc-8151-4442-b491-1e5c8faf48c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.674890 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-466mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-95ndx_openstack-operators(d56ee9fc-8151-4442-b491-1e5c8faf48c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.676012 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4"
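
"pull QPS exceeded" is kubelet's own client-side image-pull rate limit, set by registryPullQPS and registryBurst in KubeletConfiguration (upstream defaults are 5 QPS with a burst of 10), so a cold start of roughly twenty operator pods trips it regardless of registry health. A toy token-bucket sketch of the effect (assumed parameters, not kubelet's implementation):

    # Toy token bucket: qps is the steady refill rate, burst the bucket size.
    class TokenBucket:
        def __init__(self, qps=5.0, burst=10):
            self.qps, self.burst = qps, float(burst)
            self.tokens, self.last = float(burst), 0.0

        def allow(self, now):
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket()
    # ~20 operator images requested within one second, as in this log window:
    results = [bucket.allow(now=0.05 * i) for i in range(20)]
    print(results.count(True), "allowed,", results.count(False), "throttled")
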
\"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4" Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.704586 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.704893 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.704959 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert podName:02680922-54f1-494d-a32d-e01b82b9cfd2 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:53.704923669 +0000 UTC m=+1124.258453568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" (UID: "02680922-54f1-494d-a32d-e01b82b9cfd2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.724180 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2"] Nov 29 07:33:51 crc kubenswrapper[4660]: W1129 07:33:51.728241 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4747fced_480f_4185_b4e3_2dedd7f05614.slice/crio-f0720248a166697c2602b5db4ad65f31c1aed8e841841b5479923ebce8afceeb WatchSource:0}: Error finding container f0720248a166697c2602b5db4ad65f31c1aed8e841841b5479923ebce8afceeb: Status 404 returned error can't find the container with id f0720248a166697c2602b5db4ad65f31c1aed8e841841b5479923ebce8afceeb Nov 29 07:33:51 crc kubenswrapper[4660]: W1129 07:33:51.730600 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ee27942_cb74_4ee0_b4b9_9f995b6604a4.slice/crio-19379aa8051aa07c03fa3c5849784506e9cbc9dd5bb0debb4180bfc0e18f1ac8 WatchSource:0}: Error finding container 19379aa8051aa07c03fa3c5849784506e9cbc9dd5bb0debb4180bfc0e18f1ac8: Status 404 returned error can't find the container with id 19379aa8051aa07c03fa3c5849784506e9cbc9dd5bb0debb4180bfc0e18f1ac8 Nov 29 07:33:51 crc kubenswrapper[4660]: I1129 07:33:51.732986 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89"] Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.733394 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnklw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8vp89_openstack-operators(9ee27942-cb74-4ee0-b4b9-9f995b6604a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 29 07:33:51 crc kubenswrapper[4660]: E1129 07:33:51.734693 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podUID="9ee27942-cb74-4ee0-b4b9-9f995b6604a4" Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.315977 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.316292 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.316832 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.316947 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:54.316927183 +0000 UTC m=+1124.870457082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.317391 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.317483 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:54.317462108 +0000 UTC m=+1124.870992057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.678567 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" event={"ID":"01080af3-022a-430c-a9cc-b9b98f5214de","Type":"ContainerStarted","Data":"f43ecad7e34a619f6b9228f90f130f99ad242772fed2cf0b65ba408dae7113d3"} Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.681388 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" event={"ID":"e512b840-83f6-47dc-b5ed-669807cc2878","Type":"ContainerStarted","Data":"7bd1134e6cd2e6894a3c65d451380a7eea0b7abbf5a259f624c94042c03e24d1"} Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.684588 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" podUID="01080af3-022a-430c-a9cc-b9b98f5214de" Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.685445 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" event={"ID":"d56ee9fc-8151-4442-b491-1e5c8faf48c4","Type":"ContainerStarted","Data":"1f8046f3d3ed2e1e5c2c85996c2657eb41fecac5972565a3152f5d9f3c153a7c"} Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.687268 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" podUID="e512b840-83f6-47dc-b5ed-669807cc2878" Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.690687 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4" Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.696702 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" event={"ID":"9ee27942-cb74-4ee0-b4b9-9f995b6604a4","Type":"ContainerStarted","Data":"19379aa8051aa07c03fa3c5849784506e9cbc9dd5bb0debb4180bfc0e18f1ac8"} Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.699497 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podUID="9ee27942-cb74-4ee0-b4b9-9f995b6604a4" Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.708509 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" event={"ID":"4747fced-480f-4185-b4e3-2dedd7f05614","Type":"ContainerStarted","Data":"f0720248a166697c2602b5db4ad65f31c1aed8e841841b5479923ebce8afceeb"} Nov 29 07:33:52 crc kubenswrapper[4660]: I1129 07:33:52.769279 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" event={"ID":"e0c70c45-673e-47e6-80cd-99bbfbe6e695","Type":"ContainerStarted","Data":"086d6536ecfc0836b1e7ec420fcee1ba7357e2d9e3a20433b2ea771c2eb5aa3e"} Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.782428 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" podUID="1688cfe7-0002-4b5c-916b-ca18c9519de3" Nov 29 07:33:52 crc kubenswrapper[4660]: E1129 07:33:52.782544 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" podUID="e0c70c45-673e-47e6-80cd-99bbfbe6e695" Nov 29 07:33:53 crc kubenswrapper[4660]: I1129 07:33:53.129219 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.129387 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.129440 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:57.129426185 +0000 UTC m=+1127.682956074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:53 crc kubenswrapper[4660]: I1129 07:33:53.737468 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.737637 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.737695 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert podName:02680922-54f1-494d-a32d-e01b82b9cfd2 nodeName:}" failed. No retries permitted until 2025-11-29 07:33:57.737680566 +0000 UTC m=+1128.291210465 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" (UID: "02680922-54f1-494d-a32d-e01b82b9cfd2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.784214 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podUID="9ee27942-cb74-4ee0-b4b9-9f995b6604a4" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.786246 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.787129 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" podUID="e512b840-83f6-47dc-b5ed-669807cc2878" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.787829 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" podUID="01080af3-022a-430c-a9cc-b9b98f5214de" Nov 29 07:33:53 crc kubenswrapper[4660]: E1129 07:33:53.790339 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" podUID="e0c70c45-673e-47e6-80cd-99bbfbe6e695" Nov 29 07:33:54 crc kubenswrapper[4660]: I1129 07:33:54.352384 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:54 crc kubenswrapper[4660]: I1129 07:33:54.352532 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:54 crc kubenswrapper[4660]: E1129 07:33:54.352687 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:33:54 crc kubenswrapper[4660]: E1129 07:33:54.352750 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:58.352730073 +0000 UTC m=+1128.906259972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found Nov 29 07:33:54 crc kubenswrapper[4660]: E1129 07:33:54.352941 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:33:54 crc kubenswrapper[4660]: E1129 07:33:54.353135 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:33:58.353117684 +0000 UTC m=+1128.906647583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found Nov 29 07:33:57 crc kubenswrapper[4660]: I1129 07:33:57.193194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:33:57 crc kubenswrapper[4660]: E1129 07:33:57.193401 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:57 crc kubenswrapper[4660]: E1129 07:33:57.193897 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:34:05.193850416 +0000 UTC m=+1135.747380325 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:33:57 crc kubenswrapper[4660]: I1129 07:33:57.800817 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:33:57 crc kubenswrapper[4660]: E1129 07:33:57.801493 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:57 crc kubenswrapper[4660]: E1129 07:33:57.801689 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert podName:02680922-54f1-494d-a32d-e01b82b9cfd2 nodeName:}" failed. No retries permitted until 2025-11-29 07:34:05.801660054 +0000 UTC m=+1136.355190013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" (UID: "02680922-54f1-494d-a32d-e01b82b9cfd2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 29 07:33:58 crc kubenswrapper[4660]: I1129 07:33:58.409481 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:58 crc kubenswrapper[4660]: I1129 07:33:58.409567 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:33:58 crc kubenswrapper[4660]: E1129 07:33:58.409713 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:33:58 crc kubenswrapper[4660]: E1129 07:33:58.409753 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:33:58 crc kubenswrapper[4660]: E1129 07:33:58.409791 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:34:06.409773981 +0000 UTC m=+1136.963303880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found Nov 29 07:33:58 crc kubenswrapper[4660]: E1129 07:33:58.409810 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:34:06.409801552 +0000 UTC m=+1136.963331441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found Nov 29 07:34:04 crc kubenswrapper[4660]: E1129 07:34:04.507102 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 29 07:34:04 crc kubenswrapper[4660]: E1129 07:34:04.507643 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhs8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-v9g26_openstack-operators(08635026-10f5-4929-b9f5-b5d6fcac6d28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:05 crc kubenswrapper[4660]: I1129 07:34:05.209347 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:34:05 crc kubenswrapper[4660]: E1129 07:34:05.209515 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 29 07:34:05 crc kubenswrapper[4660]: E1129 07:34:05.209570 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert podName:a6e93136-e20e-4070-ae0d-db82c3d2b464 nodeName:}" failed. No retries permitted until 2025-11-29 07:34:21.209556024 +0000 UTC m=+1151.763085923 (durationBeforeRetry 16s). 
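The kuberuntime_manager "Unhandled Error" entries dump the failing container's entire spec as one single-line Go struct literal. Rebuilt as the corev1.Container it prints, abridged to the populated fields (every value below is copied from the manila-operator dump above; SecurityContext and termination-message settings are omitted, and omitted fields are zero-valued in the dump):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:    "manager",
		Image:   "quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5",
		Command: []string{"/manager"},
		Args: []string{
			"--leader-elect",
			"--health-probe-bind-address=:8081",
			"--metrics-bind-address=127.0.0.1:8080",
		},
		Env: []corev1.EnvVar{
			{Name: "LEASE_DURATION", Value: "30"},
			{Name: "RENEW_DEADLINE", Value: "20"},
			{Name: "RETRY_PERIOD", Value: "5"},
			{Name: "ENABLE_WEBHOOKS", Value: "false"},
			{Name: "METRICS_CERTS", Value: "false"},
		},
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("512Mi"), // 536870912 bytes in the dump
			},
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"), // 268435456 bytes in the dump
			},
		},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "kube-api-access-jhs8v",
			ReadOnly:  true,
			MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
		}},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)},
			},
			InitialDelaySeconds: 15,
			TimeoutSeconds:      1,
			PeriodSeconds:       20,
			SuccessThreshold:    1,
			FailureThreshold:    3,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)},
			},
			InitialDelaySeconds: 5,
			TimeoutSeconds:      1,
			PeriodSeconds:       10,
			SuccessThreshold:    1,
			FailureThreshold:    3,
		},
	}
	fmt.Printf("%+v\n", c)
}

The later dumps for the ironic, cinder, glance, watcher, horizon, octavia, ovn, neutron, keystone, telemetry and test operators differ only in Image, the kube-api-access-* mount name, and the owning pod.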
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert") pod "infra-operator-controller-manager-57548d458d-vrqgm" (UID: "a6e93136-e20e-4070-ae0d-db82c3d2b464") : secret "infra-operator-webhook-server-cert" not found Nov 29 07:34:05 crc kubenswrapper[4660]: E1129 07:34:05.534865 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Nov 29 07:34:05 crc kubenswrapper[4660]: E1129 07:34:05.535438 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6bjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-2mb85_openstack-operators(edf52fa0-02fe-49d3-8368-fe26598027ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:05 crc kubenswrapper[4660]: I1129 07:34:05.817819 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") 
" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:34:05 crc kubenswrapper[4660]: I1129 07:34:05.823800 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02680922-54f1-494d-a32d-e01b82b9cfd2-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd49blbh\" (UID: \"02680922-54f1-494d-a32d-e01b82b9cfd2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:34:05 crc kubenswrapper[4660]: I1129 07:34:05.863941 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:34:06 crc kubenswrapper[4660]: I1129 07:34:06.424746 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:06 crc kubenswrapper[4660]: I1129 07:34:06.425112 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:06 crc kubenswrapper[4660]: E1129 07:34:06.424972 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 29 07:34:06 crc kubenswrapper[4660]: E1129 07:34:06.425236 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:34:22.425208691 +0000 UTC m=+1152.978738610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "webhook-server-cert" not found Nov 29 07:34:06 crc kubenswrapper[4660]: E1129 07:34:06.425356 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 29 07:34:06 crc kubenswrapper[4660]: E1129 07:34:06.425455 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs podName:e676373b-cd82-4455-ae35-62c31e458d5d nodeName:}" failed. No retries permitted until 2025-11-29 07:34:22.425429908 +0000 UTC m=+1152.978959807 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs") pod "openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" (UID: "e676373b-cd82-4455-ae35-62c31e458d5d") : secret "metrics-server-cert" not found Nov 29 07:34:15 crc kubenswrapper[4660]: E1129 07:34:15.622114 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Nov 29 07:34:15 crc kubenswrapper[4660]: E1129 07:34:15.623106 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jm85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-cmgp5_openstack-operators(0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:17 crc kubenswrapper[4660]: E1129 07:34:17.345079 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172" Nov 29 07:34:17 crc 
kubenswrapper[4660]: E1129 07:34:17.345670 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-npdjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-4gjhw_openstack-operators(7ce83127-45e9-4a96-8815-538f3bde77ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:21 crc kubenswrapper[4660]: I1129 07:34:21.257285 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:34:21 crc kubenswrapper[4660]: I1129 07:34:21.267686 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6e93136-e20e-4070-ae0d-db82c3d2b464-cert\") pod \"infra-operator-controller-manager-57548d458d-vrqgm\" (UID: \"a6e93136-e20e-4070-ae0d-db82c3d2b464\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:34:21 crc kubenswrapper[4660]: I1129 07:34:21.490066 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wvsw8" Nov 29 07:34:21 crc kubenswrapper[4660]: I1129 07:34:21.499015 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.477345 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.477495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.484173 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-webhook-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.484847 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e676373b-cd82-4455-ae35-62c31e458d5d-metrics-certs\") pod \"openstack-operator-controller-manager-7fb5f7cfbf-7dwbm\" (UID: \"e676373b-cd82-4455-ae35-62c31e458d5d\") " pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.670923 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jz8rv" Nov 29 07:34:22 crc kubenswrapper[4660]: I1129 07:34:22.679519 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:34:22 crc kubenswrapper[4660]: E1129 07:34:22.782044 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Nov 29 07:34:22 crc kubenswrapper[4660]: E1129 07:34:22.782275 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8cvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-7hsm2_openstack-operators(4747fced-480f-4185-b4e3-2dedd7f05614): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:30 crc kubenswrapper[4660]: I1129 07:34:30.600811 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-szl5x" podUID="05fec9d8-e898-467e-9938-33ce089b3d15" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:34:35 crc kubenswrapper[4660]: I1129 07:34:35.500998 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:34:35 crc kubenswrapper[4660]: I1129 07:34:35.501667 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:34:40 crc kubenswrapper[4660]: I1129 07:34:40.599794 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-szl5x" podUID="05fec9d8-e898-467e-9938-33ce089b3d15" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:34:40 crc kubenswrapper[4660]: E1129 07:34:40.728706 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Nov 29 07:34:40 crc kubenswrapper[4660]: E1129 07:34:40.728911 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqvd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-cwb2d_openstack-operators(d2a4ddee-42a4-451d-9bd7-3028e4680d47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:43 crc kubenswrapper[4660]: E1129 07:34:43.979470 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Nov 29 07:34:43 crc kubenswrapper[4660]: E1129 07:34:43.980025 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkprg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-6c2m6_openstack-operators(2badc2b5-6bdb-44b6-8d54-f8763fe78fd6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:44 crc kubenswrapper[4660]: E1129 07:34:44.568143 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Nov 29 07:34:44 crc kubenswrapper[4660]: E1129 07:34:44.568564 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-z5n6s_openstack-operators(eb02d6d1-14c5-409f-8c54-60e35f909a84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:45 crc kubenswrapper[4660]: E1129 07:34:45.200843 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Nov 29 07:34:45 crc kubenswrapper[4660]: E1129 07:34:45.201040 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcjv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-8cnzr_openstack-operators(b191bd3e-cd1b-43c8-99c4-54701a29dfda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:45 crc kubenswrapper[4660]: E1129 07:34:45.807833 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Nov 29 07:34:45 crc kubenswrapper[4660]: E1129 07:34:45.808097 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-b2rlk_openstack-operators(96a424c4-d4f3-49c2-94a3-20d236cb207d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:47 crc kubenswrapper[4660]: E1129 07:34:47.588936 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Nov 29 07:34:47 crc kubenswrapper[4660]: E1129 07:34:47.589200 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxwft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-4zn9g_openstack-operators(01080af3-022a-430c-a9cc-b9b98f5214de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:48 crc kubenswrapper[4660]: E1129 07:34:48.141209 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Nov 29 07:34:48 crc kubenswrapper[4660]: E1129 07:34:48.141658 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfv9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-mw22w_openstack-operators(e0c70c45-673e-47e6-80cd-99bbfbe6e695): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.682746 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.683068 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jm85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-cmgp5_openstack-operators(0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.684242 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" 
podUID="0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.693959 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.694107 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhs8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-v9g26_openstack-operators(08635026-10f5-4929-b9f5-b5d6fcac6d28): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.695267 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" podUID="08635026-10f5-4929-b9f5-b5d6fcac6d28" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.698252 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.698536 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f96fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-724c7_openstack-operators(e512b840-83f6-47dc-b5ed-669807cc2878): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.752028 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.752310 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6bjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-2mb85_openstack-operators(edf52fa0-02fe-49d3-8368-fe26598027ec): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:34:51 crc kubenswrapper[4660]: E1129 07:34:51.753538 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" podUID="edf52fa0-02fe-49d3-8368-fe26598027ec" Nov 29 07:34:52 crc kubenswrapper[4660]: E1129 07:34:52.419032 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 29 07:34:52 crc kubenswrapper[4660]: E1129 07:34:52.420152 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kj8ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-t82nj_openstack-operators(1688cfe7-0002-4b5c-916b-ca18c9519de3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:53 crc kubenswrapper[4660]: E1129 07:34:53.781229 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Nov 29 07:34:53 crc kubenswrapper[4660]: E1129 07:34:53.781715 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-466mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-95ndx_openstack-operators(d56ee9fc-8151-4442-b491-1e5c8faf48c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:53 crc kubenswrapper[4660]: E1129 07:34:53.781403 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:34:53 crc kubenswrapper[4660]: E1129 07:34:53.782230 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-npdjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-4gjhw_openstack-operators(7ce83127-45e9-4a96-8815-538f3bde77ed): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 29 07:34:53 crc kubenswrapper[4660]: E1129 07:34:53.783598 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" 
podUID="7ce83127-45e9-4a96-8815-538f3bde77ed" Nov 29 07:34:54 crc kubenswrapper[4660]: E1129 07:34:54.438664 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 29 07:34:54 crc kubenswrapper[4660]: E1129 07:34:54.438871 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnklw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8vp89_openstack-operators(9ee27942-cb74-4ee0-b4b9-9f995b6604a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:34:54 crc kubenswrapper[4660]: E1129 07:34:54.440249 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podUID="9ee27942-cb74-4ee0-b4b9-9f995b6604a4" Nov 29 07:34:54 crc kubenswrapper[4660]: I1129 07:34:54.967773 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh"] Nov 29 07:34:54 crc kubenswrapper[4660]: I1129 07:34:54.979146 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm"] Nov 29 07:34:54 crc kubenswrapper[4660]: I1129 07:34:54.985862 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm"] Nov 29 07:34:59 crc kubenswrapper[4660]: I1129 07:34:56.226947 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" event={"ID":"a6e93136-e20e-4070-ae0d-db82c3d2b464","Type":"ContainerStarted","Data":"c2bf8097fa92f613fe3e566ad6269ca89e6f3f75488680bb619d1cc653440c5a"} Nov 29 07:34:59 crc kubenswrapper[4660]: I1129 07:34:56.228030 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" event={"ID":"02680922-54f1-494d-a32d-e01b82b9cfd2","Type":"ContainerStarted","Data":"9072d40d641cd6ea7c066ae3402785ed27db3a6c96c7e295f5fbc2273a9550a8"} Nov 29 07:34:59 crc kubenswrapper[4660]: I1129 07:34:56.229759 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" event={"ID":"e676373b-cd82-4455-ae35-62c31e458d5d","Type":"ContainerStarted","Data":"9d0edee719832f37d65c0dec9d9964d642fcbc852369108bffd6a85d9681cd53"} Nov 29 07:35:01 crc kubenswrapper[4660]: I1129 07:35:01.315096 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" event={"ID":"81afdf1a-a8f8-4f69-8824-192bcf14424c","Type":"ContainerStarted","Data":"448984fd6eba13e4f5423bac76509879a66df1331233a48cb3c0260ba870653b"} Nov 29 07:35:02 crc kubenswrapper[4660]: I1129 07:35:02.331685 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" event={"ID":"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d","Type":"ContainerStarted","Data":"181d87feb3dd0e07611fa131124d70b1dfd1c02f00e2642497f14c43460de59f"} Nov 29 07:35:02 crc kubenswrapper[4660]: I1129 07:35:02.333571 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" event={"ID":"29c0443d-0d08-4708-b268-07ae28680e01","Type":"ContainerStarted","Data":"a5d3f169667e25b4d5a3fc4c72299b13e9ca794be8a78a74be0376f6830867d2"} Nov 29 07:35:03 crc kubenswrapper[4660]: I1129 07:35:03.355594 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" event={"ID":"f0b999b3-e302-40ca-a1aa-5173b5655498","Type":"ContainerStarted","Data":"b182f3022b43ae0a85673fb9d3f48218c0643120565a453d86903b938406e402"} Nov 29 07:35:03 crc kubenswrapper[4660]: I1129 07:35:03.357776 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" event={"ID":"e676373b-cd82-4455-ae35-62c31e458d5d","Type":"ContainerStarted","Data":"a0ece62da7822352a0116ac0958f9a03d2b9acdc8a3ca6192e52ff813e7abd23"} Nov 29 07:35:03 crc kubenswrapper[4660]: I1129 07:35:03.358978 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:35:03 crc kubenswrapper[4660]: I1129 07:35:03.391952 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" podStartSLOduration=73.39192657 podStartE2EDuration="1m13.39192657s" podCreationTimestamp="2025-11-29 07:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:35:03.387901928 +0000 UTC m=+1193.941431837" watchObservedRunningTime="2025-11-29 07:35:03.39192657 +0000 UTC m=+1193.945456469" Nov 29 07:35:04 crc kubenswrapper[4660]: E1129 07:35:04.697295 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podUID="9ee27942-cb74-4ee0-b4b9-9f995b6604a4" Nov 29 07:35:05 crc kubenswrapper[4660]: I1129 07:35:05.500448 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:35:05 crc kubenswrapper[4660]: I1129 07:35:05.500504 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:35:06 crc kubenswrapper[4660]: E1129 07:35:06.561423 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 29 07:35:06 crc kubenswrapper[4660]: E1129 07:35:06.561685 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8cvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-7hsm2_openstack-operators(4747fced-480f-4185-b4e3-2dedd7f05614): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:35:06 crc kubenswrapper[4660]: E1129 07:35:06.562887 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" podUID="4747fced-480f-4185-b4e3-2dedd7f05614" Nov 29 07:35:09 crc kubenswrapper[4660]: I1129 07:35:09.414558 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" event={"ID":"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd","Type":"ContainerStarted","Data":"23c387582d737917753cec42ef3697361b0e23622e2724a9977eecec8cce818f"} Nov 29 07:35:09 crc kubenswrapper[4660]: I1129 07:35:09.423235 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" event={"ID":"edf52fa0-02fe-49d3-8368-fe26598027ec","Type":"ContainerStarted","Data":"42ebe036b9aef94ad20b898f64f447724056de10384ed348533e53ae19028a55"} Nov 29 07:35:09 crc kubenswrapper[4660]: I1129 07:35:09.425079 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" event={"ID":"08635026-10f5-4929-b9f5-b5d6fcac6d28","Type":"ContainerStarted","Data":"8652b3fdfb4c9daee96ee4cd024243cf3f798f46508715c6feb542b1bc491676"} Nov 29 07:35:09 crc kubenswrapper[4660]: I1129 07:35:09.426519 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" event={"ID":"7ce83127-45e9-4a96-8815-538f3bde77ed","Type":"ContainerStarted","Data":"cb4fdac54f57382992aecc0c13b6f327bd3576970f9ec75278add684d42368ba"} Nov 29 07:35:09 crc kubenswrapper[4660]: E1129 07:35:09.595089 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" podUID="b191bd3e-cd1b-43c8-99c4-54701a29dfda" Nov 29 07:35:09 crc kubenswrapper[4660]: E1129 07:35:09.737479 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" podUID="2badc2b5-6bdb-44b6-8d54-f8763fe78fd6" Nov 29 07:35:10 crc kubenswrapper[4660]: E1129 07:35:10.020022 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" podUID="01080af3-022a-430c-a9cc-b9b98f5214de" Nov 29 07:35:10 crc kubenswrapper[4660]: E1129 07:35:10.126045 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" podUID="d2a4ddee-42a4-451d-9bd7-3028e4680d47" Nov 29 07:35:10 crc kubenswrapper[4660]: E1129 07:35:10.154894 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" podUID="96a424c4-d4f3-49c2-94a3-20d236cb207d" Nov 29 07:35:10 crc kubenswrapper[4660]: E1129 07:35:10.297506 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" podUID="e0c70c45-673e-47e6-80cd-99bbfbe6e695" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.447959 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" event={"ID":"a6e93136-e20e-4070-ae0d-db82c3d2b464","Type":"ContainerStarted","Data":"d7623ebfcfb5571f6f1ebb960ece602008ae5c843b17a832256d9ac72733d59b"} Nov 29 07:35:10 crc kubenswrapper[4660]: E1129 07:35:10.452061 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" podUID="e512b840-83f6-47dc-b5ed-669807cc2878" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.463047 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" event={"ID":"e0c70c45-673e-47e6-80cd-99bbfbe6e695","Type":"ContainerStarted","Data":"e9adcfbf260f981443bf2956146492a6a17a83b89af9fcedc1e08239be87dde4"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.488444 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" event={"ID":"02680922-54f1-494d-a32d-e01b82b9cfd2","Type":"ContainerStarted","Data":"0e97e4bba6a4d8283f2044dbec014ccbc1a3e90c6af8538ceb6e529dda8a15e5"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.524178 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" event={"ID":"96a424c4-d4f3-49c2-94a3-20d236cb207d","Type":"ContainerStarted","Data":"b672243737544685a460b6e8bbd999c76af945c2740638bf3144302cc5164924"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.534423 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" event={"ID":"b191bd3e-cd1b-43c8-99c4-54701a29dfda","Type":"ContainerStarted","Data":"2a0f893435c0bd976ea99b7e3e898cf6c6b205368150d5ed3d04a7e8783dc495"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.600917 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" event={"ID":"81afdf1a-a8f8-4f69-8824-192bcf14424c","Type":"ContainerStarted","Data":"54934d0f627854552fd9ad1e3c49f267202fde5b96c491dc07d0100c0373a8b1"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.603058 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.607930 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.623414 4660 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" event={"ID":"edf52fa0-02fe-49d3-8368-fe26598027ec","Type":"ContainerStarted","Data":"c7150ab1f80cf5e9ae90fa95c7446c5f49ab8b8e3821381fc1a7d803646ac2bd"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.624175 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.642131 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jdqzs" podStartSLOduration=3.375671236 podStartE2EDuration="1m21.642111452s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:50.945989077 +0000 UTC m=+1121.499518976" lastFinishedPulling="2025-11-29 07:35:09.212429293 +0000 UTC m=+1199.765959192" observedRunningTime="2025-11-29 07:35:10.641552266 +0000 UTC m=+1201.195082165" watchObservedRunningTime="2025-11-29 07:35:10.642111452 +0000 UTC m=+1201.195641351" Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.648858 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" event={"ID":"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6","Type":"ContainerStarted","Data":"4decce004405e8a9f4930df1cd73be1515970762e0e4362f72e79bbb64a57d45"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.658383 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" event={"ID":"d2a4ddee-42a4-451d-9bd7-3028e4680d47","Type":"ContainerStarted","Data":"ad5ffb4e1f73a1b243dbdb741cd6032456de9238f57271fe319115aeb689090c"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.673068 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" event={"ID":"01080af3-022a-430c-a9cc-b9b98f5214de","Type":"ContainerStarted","Data":"4659e7d1ead02a93fbe6c752c5b689d855b59538f95425348f53dd6ee1763b49"} Nov 29 07:35:10 crc kubenswrapper[4660]: I1129 07:35:10.719061 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" podStartSLOduration=11.29245654 podStartE2EDuration="1m21.719046247s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.423662791 +0000 UTC m=+1121.977192690" lastFinishedPulling="2025-11-29 07:35:01.850252498 +0000 UTC m=+1192.403782397" observedRunningTime="2025-11-29 07:35:10.718756489 +0000 UTC m=+1201.272286388" watchObservedRunningTime="2025-11-29 07:35:10.719046247 +0000 UTC m=+1201.272576146" Nov 29 07:35:11 crc kubenswrapper[4660]: E1129 07:35:11.064592 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" podUID="eb02d6d1-14c5-409f-8c54-60e35f909a84" Nov 29 07:35:11 crc kubenswrapper[4660]: E1129 07:35:11.073774 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4" Nov 29 07:35:11 crc kubenswrapper[4660]: E1129 07:35:11.074102 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" podUID="1688cfe7-0002-4b5c-916b-ca18c9519de3" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.679901 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" event={"ID":"4747fced-480f-4185-b4e3-2dedd7f05614","Type":"ContainerStarted","Data":"185a8249ed04250b648348e4abcb190537bb43143156eb7d0c178dca5b39cbe8"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.679942 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" event={"ID":"4747fced-480f-4185-b4e3-2dedd7f05614","Type":"ContainerStarted","Data":"50d26def7f026165a6057b2c16635c7a6ed177229e6dba64ca09280a46b204c9"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.681347 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" event={"ID":"08635026-10f5-4929-b9f5-b5d6fcac6d28","Type":"ContainerStarted","Data":"8bd5139cd0af6d1e30cd7f46476f28cfa96f0dff4f583da02f738414fecb8bb4"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.682212 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.683676 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" event={"ID":"7ce83127-45e9-4a96-8815-538f3bde77ed","Type":"ContainerStarted","Data":"68b2978f64771f488f826ab3e90d06d03e4ca5e12ccfdf84db441a8ba49b04cb"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.684014 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.685231 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" event={"ID":"29c0443d-0d08-4708-b268-07ae28680e01","Type":"ContainerStarted","Data":"ad6c192a9641198661b1aace2ece9d838205a6a384d329d7cc634acfb86efd2e"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.686666 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.690375 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" event={"ID":"1688cfe7-0002-4b5c-916b-ca18c9519de3","Type":"ContainerStarted","Data":"ddac1b17ce91f34305d5fbc475ed873daa35e62636c35e4aac276cc348184aca"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.691313 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702728 4660 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702753 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702763 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" event={"ID":"a6e93136-e20e-4070-ae0d-db82c3d2b464","Type":"ContainerStarted","Data":"61b778f477a6cf22d6f58f1074031054e2606f7d48695d3dca7756dc6048c0fd"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702776 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" event={"ID":"e512b840-83f6-47dc-b5ed-669807cc2878","Type":"ContainerStarted","Data":"64d98df5659b81b5413304464c8c9d1a967c1eadd4786cc56ae03724a5c02871"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702788 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" event={"ID":"0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd","Type":"ContainerStarted","Data":"ad59d596019e7b0182874a38f9642be1beddb3f70a0c3d848996bfc7d70a1ec9"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702798 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702806 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" event={"ID":"d56ee9fc-8151-4442-b491-1e5c8faf48c4","Type":"ContainerStarted","Data":"9681fa3aa5fc9a902cd659e1ee503d36cd99381bc83a05ae3c0fad5fdd08b706"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702814 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" event={"ID":"f0b999b3-e302-40ca-a1aa-5173b5655498","Type":"ContainerStarted","Data":"a018573c7944fa12040b08ab19db94c7d72adbd2bed3c62e14ee623618dec090"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.702875 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" event={"ID":"eb02d6d1-14c5-409f-8c54-60e35f909a84","Type":"ContainerStarted","Data":"abeb6eb9a37133f23866e4764737d3bae11b81318edb6d897605652d20c00cd9"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.704724 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" event={"ID":"c0579e8a-66e1-4b7c-aaf8-435d07e6e98d","Type":"ContainerStarted","Data":"5619d8518ea0a50f14ef49e756c217b563edc9d71e1336cd4b5893155ed67092"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.705231 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.705261 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.708693 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" event={"ID":"02680922-54f1-494d-a32d-e01b82b9cfd2","Type":"ContainerStarted","Data":"7abd4acfc1fbf0ce4113f74f2dfde35ee462d4712a8b33c62ab18b39492c08b4"} Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.708962 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.710345 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.724801 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" podStartSLOduration=12.381739817 podStartE2EDuration="1m22.724782066s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.506087528 +0000 UTC m=+1122.059617427" lastFinishedPulling="2025-11-29 07:35:01.849129777 +0000 UTC m=+1192.402659676" observedRunningTime="2025-11-29 07:35:11.714572874 +0000 UTC m=+1202.268102773" watchObservedRunningTime="2025-11-29 07:35:11.724782066 +0000 UTC m=+1202.278311965" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.736130 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" podStartSLOduration=7.56658997 podStartE2EDuration="1m22.736113478s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.408356818 +0000 UTC m=+1121.961886717" lastFinishedPulling="2025-11-29 07:35:06.577880326 +0000 UTC m=+1197.131410225" observedRunningTime="2025-11-29 07:35:11.731487861 +0000 UTC m=+1202.285017760" watchObservedRunningTime="2025-11-29 07:35:11.736113478 +0000 UTC m=+1202.289643377" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.856743 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" podStartSLOduration=69.552682357 podStartE2EDuration="1m22.85672801s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:34:55.775649154 +0000 UTC m=+1186.329179053" lastFinishedPulling="2025-11-29 07:35:09.079694807 +0000 UTC m=+1199.633224706" observedRunningTime="2025-11-29 07:35:11.851397983 +0000 UTC m=+1202.404927882" watchObservedRunningTime="2025-11-29 07:35:11.85672801 +0000 UTC m=+1202.410257909" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.906600 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7446l" podStartSLOduration=5.107407507 podStartE2EDuration="1m22.906583787s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.524237939 +0000 UTC m=+1122.077767838" lastFinishedPulling="2025-11-29 07:35:09.323414219 +0000 UTC m=+1199.876944118" observedRunningTime="2025-11-29 07:35:11.905420395 +0000 UTC m=+1202.458950324" watchObservedRunningTime="2025-11-29 07:35:11.906583787 +0000 UTC m=+1202.460113686" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.912160 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" 
podStartSLOduration=11.963959368 podStartE2EDuration="1m22.912137871s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:50.902050514 +0000 UTC m=+1121.455580413" lastFinishedPulling="2025-11-29 07:35:01.850229017 +0000 UTC m=+1192.403758916" observedRunningTime="2025-11-29 07:35:11.889091404 +0000 UTC m=+1202.442621313" watchObservedRunningTime="2025-11-29 07:35:11.912137871 +0000 UTC m=+1202.465667770" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.950955 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v9rs2" podStartSLOduration=4.881051146 podStartE2EDuration="1m22.950937453s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.424058632 +0000 UTC m=+1121.977588521" lastFinishedPulling="2025-11-29 07:35:09.493944929 +0000 UTC m=+1200.047474828" observedRunningTime="2025-11-29 07:35:11.948012942 +0000 UTC m=+1202.501542841" watchObservedRunningTime="2025-11-29 07:35:11.950937453 +0000 UTC m=+1202.504467352" Nov 29 07:35:11 crc kubenswrapper[4660]: I1129 07:35:11.996026 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" podStartSLOduration=69.7023176 podStartE2EDuration="1m22.996006987s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:34:55.770036409 +0000 UTC m=+1186.323566308" lastFinishedPulling="2025-11-29 07:35:09.063725796 +0000 UTC m=+1199.617255695" observedRunningTime="2025-11-29 07:35:11.992768027 +0000 UTC m=+1202.546297926" watchObservedRunningTime="2025-11-29 07:35:11.996006987 +0000 UTC m=+1202.549536886" Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.022532 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59d587b55-wqktr" podStartSLOduration=4.491583108 podStartE2EDuration="1m23.02251542s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:50.880423556 +0000 UTC m=+1121.433953455" lastFinishedPulling="2025-11-29 07:35:09.411355868 +0000 UTC m=+1199.964885767" observedRunningTime="2025-11-29 07:35:12.019489856 +0000 UTC m=+1202.573019755" watchObservedRunningTime="2025-11-29 07:35:12.02251542 +0000 UTC m=+1202.576045319" Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.686441 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7fb5f7cfbf-7dwbm" Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.717774 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" event={"ID":"96a424c4-d4f3-49c2-94a3-20d236cb207d","Type":"ContainerStarted","Data":"ec03991d23de63658bd2d65344f1e7245139fc861ee31738a342fa80c30282bc"} Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.720989 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" event={"ID":"e0c70c45-673e-47e6-80cd-99bbfbe6e695","Type":"ContainerStarted","Data":"e2fd86fd592e7646899005fa88557bc25c8e9aaf1ae2fc12630418103df4a10f"} Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.726414 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" 
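Note on the pod_startup_latency_tracker records above: the figures fit together arithmetically. podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). This is why pods that sat in ErrImagePull/ImagePullBackOff for well over a minute still report SLO durations of only a few seconds. A minimal Go sketch of the arithmetic, using timestamps copied from the designate-operator record above (illustrative only, not the kubelet's own pod_startup_latency_tracker.go code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time's default String() form seen in the log;
	// Go's time.Parse accepts the fractional seconds implicitly.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps from the designate-operator-controller-manager record.
	created := parse("2025-11-29 07:33:49 +0000 UTC")
	firstStartedPulling := parse("2025-11-29 07:33:50.945989077 +0000 UTC")
	lastFinishedPulling := parse("2025-11-29 07:35:09.212429293 +0000 UTC")
	observedRunning := parse("2025-11-29 07:35:10.641552266 +0000 UTC")

	e2e := observedRunning.Sub(created)                     // podStartE2EDuration
	pulling := lastFinishedPulling.Sub(firstStartedPulling) // image-pull window
	slo := e2e - pulling                                    // podStartSLOduration

	fmt.Println(e2e, pulling, slo)
}

Run against these values it prints 1m21.641552266s 1m18.266440216s 3.37511205s; the logged figures (podStartE2EDuration=1m21.642111452s, podStartSLOduration=3.375671236s) differ by roughly half a millisecond because the kubelet computes on its monotonic clock (the m=+... offsets in the log), not wall time.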
Nov 29 07:35:12 crc kubenswrapper[4660]: I1129 07:35:12.767861 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" podStartSLOduration=6.157584345 podStartE2EDuration="1m23.767844196s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.729835758 +0000 UTC m=+1122.283365657" lastFinishedPulling="2025-11-29 07:35:09.340095609 +0000 UTC m=+1199.893625508" observedRunningTime="2025-11-29 07:35:12.757452679 +0000 UTC m=+1203.310982588" watchObservedRunningTime="2025-11-29 07:35:12.767844196 +0000 UTC m=+1203.321374085" Nov 29 07:35:13 crc kubenswrapper[4660]: E1129 07:35:13.511767 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podUID="d56ee9fc-8151-4442-b491-1e5c8faf48c4" Nov 29 07:35:13 crc kubenswrapper[4660]: I1129 07:35:13.729437 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" event={"ID":"2badc2b5-6bdb-44b6-8d54-f8763fe78fd6","Type":"ContainerStarted","Data":"e31c31e2fc7b19cf74c5524f7ba7b86ba3655c0beec9fef047cfaa8dc88be6da"} Nov 29 07:35:13 crc kubenswrapper[4660]: I1129 07:35:13.753054 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" podStartSLOduration=5.031191862 podStartE2EDuration="1m24.753038657s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.463592484 +0000 UTC m=+1122.017122383" lastFinishedPulling="2025-11-29 07:35:11.185439279 +0000 UTC m=+1201.738969178" observedRunningTime="2025-11-29 07:35:13.747981287 +0000 UTC m=+1204.301511186" watchObservedRunningTime="2025-11-29 07:35:13.753038657 +0000 UTC m=+1204.306568556" Nov 29 07:35:13 crc kubenswrapper[4660]: I1129 07:35:13.774197 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" podStartSLOduration=5.277692411 podStartE2EDuration="1m24.774173751s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.613632948 +0000 UTC m=+1122.167162837" lastFinishedPulling="2025-11-29 07:35:11.110114268 +0000 UTC m=+1201.663644177" observedRunningTime="2025-11-29 07:35:13.76907833 +0000 UTC m=+1204.322608229" watchObservedRunningTime="2025-11-29 07:35:13.774173751 +0000 UTC m=+1204.327703660" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.737841 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" event={"ID":"d2a4ddee-42a4-451d-9bd7-3028e4680d47","Type":"ContainerStarted","Data":"34b46aa99599134278ab250df93b38a48e45d482e877437a62520402bc5eca0d"} Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.739213 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.740657 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" event={"ID":"01080af3-022a-430c-a9cc-b9b98f5214de","Type":"ContainerStarted","Data":"23910f81379c5f843759e4a43dcbd617d67024ec80baa2b32d878fc48e4f7db4"} Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.741132 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.743214 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" event={"ID":"b191bd3e-cd1b-43c8-99c4-54701a29dfda","Type":"ContainerStarted","Data":"be6660a0373113ac0187ca775d093b41d92bac21d3a962c05c91988f9705f652"} Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.743246 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.743361 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.757703 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" podStartSLOduration=3.680178096 podStartE2EDuration="1m25.757682686s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.491272558 +0000 UTC m=+1122.044802457" lastFinishedPulling="2025-11-29 07:35:13.568777138 +0000 UTC m=+1204.122307047" observedRunningTime="2025-11-29 07:35:14.751314521 +0000 UTC m=+1205.304844440" watchObservedRunningTime="2025-11-29 07:35:14.757682686 +0000 UTC m=+1205.311212585" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.767354 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" podStartSLOduration=5.817622353 podStartE2EDuration="1m25.767331672s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.463263984 +0000 UTC m=+1122.016793883" lastFinishedPulling="2025-11-29 07:35:11.412973303 +0000 UTC m=+1201.966503202" observedRunningTime="2025-11-29 07:35:14.766535001 +0000 UTC m=+1205.320064900" watchObservedRunningTime="2025-11-29 07:35:14.767331672 +0000 UTC m=+1205.320861571" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.787462 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" podStartSLOduration=6.007816317 podStartE2EDuration="1m25.787442968s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.634184435 +0000 UTC m=+1122.187714334" lastFinishedPulling="2025-11-29 07:35:11.413811076 +0000 UTC m=+1201.967340985" observedRunningTime="2025-11-29 07:35:14.781714059 +0000 UTC m=+1205.335243968" watchObservedRunningTime="2025-11-29 07:35:14.787442968 +0000 UTC m=+1205.340972857" Nov 29 07:35:14 crc kubenswrapper[4660]: I1129 07:35:14.808904 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" podStartSLOduration=5.915841327 podStartE2EDuration="1m25.80888319s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" 
firstStartedPulling="2025-11-29 07:33:51.519971841 +0000 UTC m=+1122.073501740" lastFinishedPulling="2025-11-29 07:35:11.413013704 +0000 UTC m=+1201.966543603" observedRunningTime="2025-11-29 07:35:14.805186668 +0000 UTC m=+1205.358716567" watchObservedRunningTime="2025-11-29 07:35:14.80888319 +0000 UTC m=+1205.362413089" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.757392 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" event={"ID":"e512b840-83f6-47dc-b5ed-669807cc2878","Type":"ContainerStarted","Data":"ca2982afd4cdcf0c2a1a64fdbbb68091695e32d14238d2691c085c667c8af510"} Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.758030 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.760369 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" event={"ID":"eb02d6d1-14c5-409f-8c54-60e35f909a84","Type":"ContainerStarted","Data":"2e80af108c85f0acfeab6d311676625e70f705720332c2b253a2261eebd17b8d"} Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.761025 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.763596 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" event={"ID":"1688cfe7-0002-4b5c-916b-ca18c9519de3","Type":"ContainerStarted","Data":"a18fed5af142204598634082cee5fed6e7b015527963da3c9621d0a3c237cafc"} Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.764020 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.780518 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" podStartSLOduration=4.074063456 podStartE2EDuration="1m26.780500257s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.668923035 +0000 UTC m=+1122.222452934" lastFinishedPulling="2025-11-29 07:35:14.375359836 +0000 UTC m=+1204.928889735" observedRunningTime="2025-11-29 07:35:15.780006253 +0000 UTC m=+1206.333536172" watchObservedRunningTime="2025-11-29 07:35:15.780500257 +0000 UTC m=+1206.334030156" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.806073 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" podStartSLOduration=3.956076078 podStartE2EDuration="1m26.806054283s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.526748829 +0000 UTC m=+1122.080278728" lastFinishedPulling="2025-11-29 07:35:14.376727034 +0000 UTC m=+1204.930256933" observedRunningTime="2025-11-29 07:35:15.799725377 +0000 UTC m=+1206.353255276" watchObservedRunningTime="2025-11-29 07:35:15.806054283 +0000 UTC m=+1206.359584182" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.819823 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" podStartSLOduration=3.487485105 
podStartE2EDuration="1m26.819805353s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.518319716 +0000 UTC m=+1122.071849625" lastFinishedPulling="2025-11-29 07:35:14.850639974 +0000 UTC m=+1205.404169873" observedRunningTime="2025-11-29 07:35:15.816573953 +0000 UTC m=+1206.370103862" watchObservedRunningTime="2025-11-29 07:35:15.819805353 +0000 UTC m=+1206.373335252" Nov 29 07:35:15 crc kubenswrapper[4660]: I1129 07:35:15.869653 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd49blbh" Nov 29 07:35:18 crc kubenswrapper[4660]: I1129 07:35:18.782101 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" event={"ID":"9ee27942-cb74-4ee0-b4b9-9f995b6604a4","Type":"ContainerStarted","Data":"9ebbc143caf24b7f2b19dd758a22bb309b01c9368a3c9bf6f4b832c7ad1ddcc0"} Nov 29 07:35:18 crc kubenswrapper[4660]: I1129 07:35:18.800738 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8vp89" podStartSLOduration=2.193421172 podStartE2EDuration="1m28.800723437s" podCreationTimestamp="2025-11-29 07:33:50 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.733263752 +0000 UTC m=+1122.286793651" lastFinishedPulling="2025-11-29 07:35:18.340566017 +0000 UTC m=+1208.894095916" observedRunningTime="2025-11-29 07:35:18.797765306 +0000 UTC m=+1209.351295245" watchObservedRunningTime="2025-11-29 07:35:18.800723437 +0000 UTC m=+1209.354253336" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.447486 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cmgp5" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.549280 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-4gjhw" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.603423 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-cwb2d" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.762233 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.764257 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-b2rlk" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.797740 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-2mb85" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.853898 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-v9g26" Nov 29 07:35:19 crc kubenswrapper[4660]: I1129 07:35:19.970678 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-t82nj" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.064565 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-6c2m6" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.188532 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-8cnzr" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.345013 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-z5n6s" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.423949 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-724c7" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.562686 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-4zn9g" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.608199 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.652083 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mw22w" Nov 29 07:35:20 crc kubenswrapper[4660]: I1129 07:35:20.844890 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hsm2" Nov 29 07:35:21 crc kubenswrapper[4660]: I1129 07:35:21.506287 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-vrqgm" Nov 29 07:35:27 crc kubenswrapper[4660]: I1129 07:35:27.855731 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" event={"ID":"d56ee9fc-8151-4442-b491-1e5c8faf48c4","Type":"ContainerStarted","Data":"91e4fe8f5ed204f5f3539f973ba2c6e8f3f17dce1f3d3263f1b8fd1c796fdcc4"} Nov 29 07:35:27 crc kubenswrapper[4660]: I1129 07:35:27.857650 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:35:27 crc kubenswrapper[4660]: I1129 07:35:27.879282 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" podStartSLOduration=3.136080979 podStartE2EDuration="1m38.879260422s" podCreationTimestamp="2025-11-29 07:33:49 +0000 UTC" firstStartedPulling="2025-11-29 07:33:51.67270725 +0000 UTC m=+1122.226237149" lastFinishedPulling="2025-11-29 07:35:27.415886693 +0000 UTC m=+1217.969416592" observedRunningTime="2025-11-29 07:35:27.876188837 +0000 UTC m=+1218.429718736" watchObservedRunningTime="2025-11-29 07:35:27.879260422 +0000 UTC m=+1218.432790321" Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.500890 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.501504 4660 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.501601 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.502728 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.502865 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a" gracePeriod=600 Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.916051 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a" exitCode=0 Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.916254 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a"} Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.916440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47"} Nov 29 07:35:35 crc kubenswrapper[4660]: I1129 07:35:35.916462 4660 scope.go:117] "RemoveContainer" containerID="dcd84865061a683fd99b3d22cec95cee8b6991ac454110033b3fc10f47f460b1" Nov 29 07:35:40 crc kubenswrapper[4660]: I1129 07:35:40.694379 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-95ndx" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.699129 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.701201 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.703752 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.703967 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.704082 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.704736 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8lfpx" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.706859 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.713489 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.843017 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.843232 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsc4s\" (UniqueName: \"kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.843280 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.944501 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.944560 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc4s\" (UniqueName: \"kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.944596 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.945374 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.945878 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:54 crc kubenswrapper[4660]: I1129 07:35:54.975127 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc4s\" (UniqueName: \"kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s\") pod \"dnsmasq-dns-78dd6ddcc-6t8b8\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:55 crc kubenswrapper[4660]: I1129 07:35:55.039078 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:35:55 crc kubenswrapper[4660]: I1129 07:35:55.521584 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:35:56 crc kubenswrapper[4660]: I1129 07:35:56.070781 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" event={"ID":"4403a2a7-6d17-44b3-9837-52f8817cd43e","Type":"ContainerStarted","Data":"4061a82d69318258b38d775ea0b5a19d05639b27964008d253b8b3a3c3c7b8b6"} Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.541715 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.542816 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.559227 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.607890 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.607982 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz929\" (UniqueName: \"kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.608003 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.708999 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.709754 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.710094 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz929\" (UniqueName: \"kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.710117 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.711389 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.738517 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz929\" (UniqueName: 
\"kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929\") pod \"dnsmasq-dns-666b6646f7-wcvwn\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.859523 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.941658 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.985846 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:35:57 crc kubenswrapper[4660]: I1129 07:35:57.987158 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.013964 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.014208 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5df\" (UniqueName: \"kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.014315 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.022427 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.115420 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.115800 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.115901 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk5df\" (UniqueName: \"kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.116342 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.116964 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.155534 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk5df\" (UniqueName: \"kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df\") pod \"dnsmasq-dns-57d769cc4f-4ng99\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.305075 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.637460 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:35:58 crc kubenswrapper[4660]: W1129 07:35:58.645063 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43057c2f_d0e6_444c_9486_1fa1b4ad03c8.slice/crio-d15f836334416a94254e574798184f967b7388d1a8d4cbc3c4ba941fe2b228a1 WatchSource:0}: Error finding container d15f836334416a94254e574798184f967b7388d1a8d4cbc3c4ba941fe2b228a1: Status 404 returned error can't find the container with id d15f836334416a94254e574798184f967b7388d1a8d4cbc3c4ba941fe2b228a1 Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.722324 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.723921 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.728458 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.728757 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g9twm" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.728858 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.728958 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.729940 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.730873 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.733645 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.738545 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.796403 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832124 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832171 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz4b\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832212 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832248 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832278 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832308 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832335 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832353 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832381 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832667 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.832713 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934240 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934280 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934302 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffz4b\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934332 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934360 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934410 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934430 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934447 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934469 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934491 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.934957 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.935378 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.935958 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.936030 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.936768 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.936941 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.940448 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.940943 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.941032 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.941478 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.953917 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffz4b\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:58 crc kubenswrapper[4660]: I1129 07:35:58.961808 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " pod="openstack/rabbitmq-server-0" Nov 29 07:35:59 crc 
kubenswrapper[4660]: I1129 07:35:59.095820 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" event={"ID":"43057c2f-d0e6-444c-9486-1fa1b4ad03c8","Type":"ContainerStarted","Data":"d15f836334416a94254e574798184f967b7388d1a8d4cbc3c4ba941fe2b228a1"} Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.096941 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" event={"ID":"71c939cc-c281-4adf-b76d-8b31680a500b","Type":"ContainerStarted","Data":"abf055bc1a645cb96f0c7e9a9883bb6650f1c76e11e349b10ff69d974c67d009"} Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.145194 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.150004 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.151148 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.155559 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.157051 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.157334 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.158163 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.158281 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fczw9" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.158387 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.158559 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.163232 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.338861 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339160 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339220 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339238 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339252 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339287 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339306 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc99\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339431 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339553 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.339597 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440475 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440517 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440538 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440557 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc99\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440577 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440628 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440651 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440695 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440719 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440738 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.440768 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.441492 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.441568 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.442282 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.442584 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.442752 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.448395 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.449361 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.449438 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.451883 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.460696 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.474003 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc99\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.527432 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.633746 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:35:59 crc kubenswrapper[4660]: W1129 07:35:59.642834 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a408d44_6909_4748_9b8e_72da66b0afea.slice/crio-aec1ffb48dea3abbf284d0dd849ee8b69ea922c7b4d83837abc2e8b63e6bd3b7 WatchSource:0}: Error finding container aec1ffb48dea3abbf284d0dd849ee8b69ea922c7b4d83837abc2e8b63e6bd3b7: Status 404 returned error can't find the container with id aec1ffb48dea3abbf284d0dd849ee8b69ea922c7b4d83837abc2e8b63e6bd3b7 Nov 29 07:35:59 crc kubenswrapper[4660]: I1129 07:35:59.805682 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.100986 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.118081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerStarted","Data":"aec1ffb48dea3abbf284d0dd849ee8b69ea922c7b4d83837abc2e8b63e6bd3b7"} Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.570989 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.572746 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.574968 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.575009 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.575727 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.575987 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7gxqs" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.589634 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.599733 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758340 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758391 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758455 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758480 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758510 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758534 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758551 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gwnm6\" (UniqueName: \"kubernetes.io/projected/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kube-api-access-gwnm6\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.758571 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860397 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860445 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnm6\" (UniqueName: \"kubernetes.io/projected/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kube-api-access-gwnm6\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860471 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860521 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860544 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860591 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860628 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.860666 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.861314 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.861582 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.861645 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.862076 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.862540 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.870683 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.884253 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnm6\" (UniqueName: \"kubernetes.io/projected/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-kube-api-access-gwnm6\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.884796 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:00 crc kubenswrapper[4660]: I1129 07:36:00.897497 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910\") " pod="openstack/openstack-galera-0" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.124537 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerStarted","Data":"7a0482640c44c50ba843984db17eefcdab81d00dd75b0a5cac65e01a6a1bc8a4"} Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.196728 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.650364 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 29 07:36:01 crc kubenswrapper[4660]: W1129 07:36:01.657770 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb90d2bf_1b0e_4d18_9bff_2d9adb8e3910.slice/crio-1743e7a38c4e23d073ef67935a69087cfd42805ba8ee3adf8772b17cd7fbf34a WatchSource:0}: Error finding container 1743e7a38c4e23d073ef67935a69087cfd42805ba8ee3adf8772b17cd7fbf34a: Status 404 returned error can't find the container with id 1743e7a38c4e23d073ef67935a69087cfd42805ba8ee3adf8772b17cd7fbf34a Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.949058 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.950531 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.956513 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.960694 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.960893 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.961007 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hw7f6" Nov 29 07:36:01 crc kubenswrapper[4660]: I1129 07:36:01.961121 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.082916 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.082971 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.082993 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96sbl\" (UniqueName: \"kubernetes.io/projected/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kube-api-access-96sbl\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.083049 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.083095 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.083125 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.083156 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.083195 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.133501 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910","Type":"ContainerStarted","Data":"1743e7a38c4e23d073ef67935a69087cfd42805ba8ee3adf8772b17cd7fbf34a"} Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185103 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185167 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185196 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185271 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185287 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185311 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96sbl\" (UniqueName: \"kubernetes.io/projected/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kube-api-access-96sbl\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.185361 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.186358 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.186376 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.186770 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.187673 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.187691 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.191126 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.196889 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.219327 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96sbl\" (UniqueName: \"kubernetes.io/projected/4a1c83c7-2cac-4b54-90c4-080b7f50cd7f-kube-api-access-96sbl\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.264976 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f\") " pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.280756 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.347513 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.348497 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.353974 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.354297 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tjtnx" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.354302 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.359286 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.498476 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cmc\" (UniqueName: \"kubernetes.io/projected/46c3b1d2-02f5-4632-bf44-648754c2e83c-kube-api-access-d2cmc\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.498867 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-kolla-config\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.498900 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.498950 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.498972 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-config-data\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.599983 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.600025 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-config-data\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.600052 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cmc\" (UniqueName: 
\"kubernetes.io/projected/46c3b1d2-02f5-4632-bf44-648754c2e83c-kube-api-access-d2cmc\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.600119 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-kolla-config\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.600145 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.601130 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-config-data\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.601181 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/46c3b1d2-02f5-4632-bf44-648754c2e83c-kolla-config\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.610292 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.618885 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c3b1d2-02f5-4632-bf44-648754c2e83c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.621763 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cmc\" (UniqueName: \"kubernetes.io/projected/46c3b1d2-02f5-4632-bf44-648754c2e83c-kube-api-access-d2cmc\") pod \"memcached-0\" (UID: \"46c3b1d2-02f5-4632-bf44-648754c2e83c\") " pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.699709 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 29 07:36:02 crc kubenswrapper[4660]: I1129 07:36:02.853480 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 29 07:36:02 crc kubenswrapper[4660]: W1129 07:36:02.861167 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1c83c7_2cac_4b54_90c4_080b7f50cd7f.slice/crio-87435abdc97f7476e4c66ba0564e6d3b1484759020a15c68009c383408856665 WatchSource:0}: Error finding container 87435abdc97f7476e4c66ba0564e6d3b1484759020a15c68009c383408856665: Status 404 returned error can't find the container with id 87435abdc97f7476e4c66ba0564e6d3b1484759020a15c68009c383408856665 Nov 29 07:36:03 crc kubenswrapper[4660]: I1129 07:36:03.145747 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f","Type":"ContainerStarted","Data":"87435abdc97f7476e4c66ba0564e6d3b1484759020a15c68009c383408856665"} Nov 29 07:36:03 crc kubenswrapper[4660]: I1129 07:36:03.159198 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.156407 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"46c3b1d2-02f5-4632-bf44-648754c2e83c","Type":"ContainerStarted","Data":"87bed47889eb108e8409b330314d6ce2901f221007d4a3d1e876e0171dfd002b"} Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.426094 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.427091 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.429778 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xcl2b" Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.446170 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.529822 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db584\" (UniqueName: \"kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584\") pod \"kube-state-metrics-0\" (UID: \"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e\") " pod="openstack/kube-state-metrics-0" Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.631551 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db584\" (UniqueName: \"kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584\") pod \"kube-state-metrics-0\" (UID: \"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e\") " pod="openstack/kube-state-metrics-0" Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.652212 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db584\" (UniqueName: \"kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584\") pod \"kube-state-metrics-0\" (UID: \"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e\") " pod="openstack/kube-state-metrics-0" Nov 29 07:36:04 crc kubenswrapper[4660]: I1129 07:36:04.751704 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:36:05 crc kubenswrapper[4660]: I1129 07:36:05.180785 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:36:05 crc kubenswrapper[4660]: W1129 07:36:05.189894 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7f07db1_9bb5_4a2d_ab6f_62d7cec3c34e.slice/crio-96ee88940dcc7ef7225b5c00ad83cc5a3a300c617bb37e4f7d57bcef91c252c8 WatchSource:0}: Error finding container 96ee88940dcc7ef7225b5c00ad83cc5a3a300c617bb37e4f7d57bcef91c252c8: Status 404 returned error can't find the container with id 96ee88940dcc7ef7225b5c00ad83cc5a3a300c617bb37e4f7d57bcef91c252c8 Nov 29 07:36:06 crc kubenswrapper[4660]: I1129 07:36:06.172360 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e","Type":"ContainerStarted","Data":"96ee88940dcc7ef7225b5c00ad83cc5a3a300c617bb37e4f7d57bcef91c252c8"} Nov 29 07:36:06 crc kubenswrapper[4660]: I1129 07:36:06.775832 4660 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-ds7np container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 29 07:36:06 crc kubenswrapper[4660]: I1129 07:36:06.775894 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ds7np" podUID="c3aaf1b2-a146-43cd-91ab-8ee65cff6e44" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.016683 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xdz26"] Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.017773 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.027459 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.027499 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qkq48" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.027499 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.046957 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26"] Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.052272 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rdslv"] Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.053937 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103073 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75569c9-ce83-4515-894c-b067e01f3d9b-scripts\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103125 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-run\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103144 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-log-ovn\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103193 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-ovn-controller-tls-certs\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103215 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-etc-ovs\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103235 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-lib\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103262 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-combined-ca-bundle\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103284 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run-ovn\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103305 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkjb\" (UniqueName: \"kubernetes.io/projected/a75569c9-ce83-4515-894c-b067e01f3d9b-kube-api-access-fgkjb\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " 
pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103329 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4ls\" (UniqueName: \"kubernetes.io/projected/538da925-a098-483e-a112-334d0930655e-kube-api-access-wr4ls\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103347 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-log\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103363 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/538da925-a098-483e-a112-334d0930655e-scripts\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.103388 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.119110 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rdslv"] Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205320 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-ovn-controller-tls-certs\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205397 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-etc-ovs\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205429 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-lib\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205505 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-combined-ca-bundle\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205531 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run-ovn\") pod \"ovn-controller-xdz26\" (UID: 
\"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205562 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkjb\" (UniqueName: \"kubernetes.io/projected/a75569c9-ce83-4515-894c-b067e01f3d9b-kube-api-access-fgkjb\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205603 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4ls\" (UniqueName: \"kubernetes.io/projected/538da925-a098-483e-a112-334d0930655e-kube-api-access-wr4ls\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205648 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-log\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.205669 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/538da925-a098-483e-a112-334d0930655e-scripts\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.206019 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-etc-ovs\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.209945 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210081 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75569c9-ce83-4515-894c-b067e01f3d9b-scripts\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210119 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-run\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210142 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-log-ovn\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210435 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run-ovn\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210556 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-log-ovn\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.210646 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-lib\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.211117 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75569c9-ce83-4515-894c-b067e01f3d9b-var-run\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.211214 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-log\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.213220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/538da925-a098-483e-a112-334d0930655e-scripts\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.213278 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/538da925-a098-483e-a112-334d0930655e-var-run\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.218420 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75569c9-ce83-4515-894c-b067e01f3d9b-scripts\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.228185 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-ovn-controller-tls-certs\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.229495 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75569c9-ce83-4515-894c-b067e01f3d9b-combined-ca-bundle\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.231313 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkjb\" (UniqueName: \"kubernetes.io/projected/a75569c9-ce83-4515-894c-b067e01f3d9b-kube-api-access-fgkjb\") pod \"ovn-controller-xdz26\" (UID: \"a75569c9-ce83-4515-894c-b067e01f3d9b\") " pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.233673 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4ls\" (UniqueName: \"kubernetes.io/projected/538da925-a098-483e-a112-334d0930655e-kube-api-access-wr4ls\") pod \"ovn-controller-ovs-rdslv\" (UID: \"538da925-a098-483e-a112-334d0930655e\") " pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.340405 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26" Nov 29 07:36:09 crc kubenswrapper[4660]: I1129 07:36:09.370180 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.012760 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.014006 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.016095 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.016285 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.016387 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.017001 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x8xxv" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.023794 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.026194 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123453 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123536 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123599 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc 
kubenswrapper[4660]: I1129 07:36:10.123651 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123671 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123703 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123717 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.123745 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glz24\" (UniqueName: \"kubernetes.io/projected/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-kube-api-access-glz24\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224747 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224831 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224868 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224893 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224944 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224972 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.224998 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.225020 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glz24\" (UniqueName: \"kubernetes.io/projected/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-kube-api-access-glz24\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.225232 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.225570 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.226368 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.226730 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.234760 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.241432 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.244012 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.245545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glz24\" (UniqueName: \"kubernetes.io/projected/6d07487c-33de-4aa4-9878-bcdd17e2a1d9-kube-api-access-glz24\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.253593 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6d07487c-33de-4aa4-9878-bcdd17e2a1d9\") " pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:10 crc kubenswrapper[4660]: I1129 07:36:10.333825 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.682380 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.683719 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.690862 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.691075 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.691206 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lp7q7" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.702065 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.715977 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852528 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852588 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852725 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc 
kubenswrapper[4660]: I1129 07:36:11.852761 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852810 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mwh\" (UniqueName: \"kubernetes.io/projected/825a377f-a7b3-4a9c-a39c-8e3086eb554f-kube-api-access-m5mwh\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852873 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.852898 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-config\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.853068 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.954299 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.954350 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.954402 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.954474 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.954516 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mwh\" (UniqueName: 
\"kubernetes.io/projected/825a377f-a7b3-4a9c-a39c-8e3086eb554f-kube-api-access-m5mwh\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.955035 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.955740 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.955572 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.956778 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-config\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.956845 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.958741 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.959215 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/825a377f-a7b3-4a9c-a39c-8e3086eb554f-config\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.960501 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.961213 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.965169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/825a377f-a7b3-4a9c-a39c-8e3086eb554f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:11 crc kubenswrapper[4660]: I1129 07:36:11.975939 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mwh\" (UniqueName: \"kubernetes.io/projected/825a377f-a7b3-4a9c-a39c-8e3086eb554f-kube-api-access-m5mwh\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:12 crc kubenswrapper[4660]: I1129 07:36:12.007145 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"825a377f-a7b3-4a9c-a39c-8e3086eb554f\") " pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:12 crc kubenswrapper[4660]: I1129 07:36:12.027808 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.945241 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.946654 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffz4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0a408d44-6909-4748-9b8e-72da66b0afea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.945380 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.946955 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhc99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(0604115a-3f3a-4061-bb63-ada6ebb5d458): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.947942 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" Nov 29 07:36:17 crc kubenswrapper[4660]: E1129 07:36:17.948085 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" Nov 29 07:36:18 crc kubenswrapper[4660]: E1129 07:36:18.271343 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" Nov 29 07:36:18 crc kubenswrapper[4660]: E1129 07:36:18.271422 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.010138 4660 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.010632 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96sbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(4a1c83c7-2cac-4b54-90c4-080b7f50cd7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.012380 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="4a1c83c7-2cac-4b54-90c4-080b7f50cd7f" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.037413 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.037683 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwnm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.038905 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910" Nov 29 07:36:37 crc kubenswrapper[4660]: I1129 07:36:37.416295 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26"] Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.702344 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.702832 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c5h7bhdchffhcfh687h7h58h577h65h645h57h58ch5dbh557h58h565h65dh7ch9fh5fch54chb8h665hf8h64bh7fh64h5dch59bh665hb7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(46c3b1d2-02f5-4632-bf44-648754c2e83c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.704004 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="46c3b1d2-02f5-4632-bf44-648754c2e83c" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.785390 4660 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="4a1c83c7-2cac-4b54-90c4-080b7f50cd7f" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.785691 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910" Nov 29 07:36:37 crc kubenswrapper[4660]: E1129 07:36:37.787300 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="46c3b1d2-02f5-4632-bf44-648754c2e83c" Nov 29 07:36:42 crc kubenswrapper[4660]: W1129 07:36:42.516752 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75569c9_ce83_4515_894c_b067e01f3d9b.slice/crio-7aa16a6d5b8d9758045158cca58fdb16ae5c51f89f0b35dd13daf523c5498c1a WatchSource:0}: Error finding container 7aa16a6d5b8d9758045158cca58fdb16ae5c51f89f0b35dd13daf523c5498c1a: Status 404 returned error can't find the container with id 7aa16a6d5b8d9758045158cca58fdb16ae5c51f89f0b35dd13daf523c5498c1a Nov 29 07:36:42 crc kubenswrapper[4660]: E1129 07:36:42.547116 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-neutron-server/blobs/sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb\": context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:36:42 crc kubenswrapper[4660]: E1129 07:36:42.547335 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5df,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4ng99_openstack(71c939cc-c281-4adf-b76d-8b31680a500b): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-neutron-server/blobs/sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb\": context canceled" logger="UnhandledError" Nov 29 07:36:42 crc kubenswrapper[4660]: E1129 07:36:42.548559 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-neutron-server/blobs/sha256:0caae9d0ebac26201d0e960c7b5055167492ee8b2edf679abccbfa9d4072d7bb\\\": context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" podUID="71c939cc-c281-4adf-b76d-8b31680a500b" Nov 29 07:36:42 crc kubenswrapper[4660]: I1129 07:36:42.825454 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26" event={"ID":"a75569c9-ce83-4515-894c-b067e01f3d9b","Type":"ContainerStarted","Data":"7aa16a6d5b8d9758045158cca58fdb16ae5c51f89f0b35dd13daf523c5498c1a"} Nov 29 07:36:42 crc kubenswrapper[4660]: E1129 07:36:42.827223 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" podUID="71c939cc-c281-4adf-b76d-8b31680a500b" Nov 29 07:36:43 crc kubenswrapper[4660]: I1129 07:36:43.326539 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 
29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.718406 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.718586 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsc4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6t8b8_openstack(4403a2a7-6d17-44b3-9837-52f8817cd43e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.719808 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" podUID="4403a2a7-6d17-44b3-9837-52f8817cd43e" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.741867 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.742053 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz929,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-wcvwn_openstack(43057c2f-d0e6-444c-9486-1fa1b4ad03c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.743971 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.816755 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.817013 4660 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.817118 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-db584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.818377 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" Nov 29 07:36:43 crc kubenswrapper[4660]: W1129 07:36:43.843055 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d07487c_33de_4aa4_9878_bcdd17e2a1d9.slice/crio-0cdce4a518a54ccac569a75c53e367ccc3aafbc81169aa82af2ce4e47de23241 WatchSource:0}: Error finding container 0cdce4a518a54ccac569a75c53e367ccc3aafbc81169aa82af2ce4e47de23241: Status 404 returned error can't find the container with id 0cdce4a518a54ccac569a75c53e367ccc3aafbc81169aa82af2ce4e47de23241 Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.845572 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" Nov 29 07:36:43 crc kubenswrapper[4660]: E1129 07:36:43.845590 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.191908 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.233108 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 29 07:36:44 crc kubenswrapper[4660]: W1129 07:36:44.240387 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod825a377f_a7b3_4a9c_a39c_8e3086eb554f.slice/crio-834140af789cf1bfe04c987473951f689ac47afe54dec6c669e8dd5f9019522b WatchSource:0}: Error finding container 834140af789cf1bfe04c987473951f689ac47afe54dec6c669e8dd5f9019522b: Status 404 returned error can't find the container with id 834140af789cf1bfe04c987473951f689ac47afe54dec6c669e8dd5f9019522b Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.316166 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4403a2a7-6d17-44b3-9837-52f8817cd43e" (UID: "4403a2a7-6d17-44b3-9837-52f8817cd43e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.315091 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc\") pod \"4403a2a7-6d17-44b3-9837-52f8817cd43e\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.317569 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config\") pod \"4403a2a7-6d17-44b3-9837-52f8817cd43e\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.317688 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsc4s\" (UniqueName: \"kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s\") pod \"4403a2a7-6d17-44b3-9837-52f8817cd43e\" (UID: \"4403a2a7-6d17-44b3-9837-52f8817cd43e\") " Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.318963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config" (OuterVolumeSpecName: "config") pod "4403a2a7-6d17-44b3-9837-52f8817cd43e" (UID: "4403a2a7-6d17-44b3-9837-52f8817cd43e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.319675 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.319692 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4403a2a7-6d17-44b3-9837-52f8817cd43e-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.324436 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s" (OuterVolumeSpecName: "kube-api-access-tsc4s") pod "4403a2a7-6d17-44b3-9837-52f8817cd43e" (UID: "4403a2a7-6d17-44b3-9837-52f8817cd43e"). InnerVolumeSpecName "kube-api-access-tsc4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.375705 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rdslv"] Nov 29 07:36:44 crc kubenswrapper[4660]: W1129 07:36:44.383099 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod538da925_a098_483e_a112_334d0930655e.slice/crio-98cbf9a3e050354f27f1f8067152620b64856c5188f9d68830dc3e43f7faa3a2 WatchSource:0}: Error finding container 98cbf9a3e050354f27f1f8067152620b64856c5188f9d68830dc3e43f7faa3a2: Status 404 returned error can't find the container with id 98cbf9a3e050354f27f1f8067152620b64856c5188f9d68830dc3e43f7faa3a2 Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.421170 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsc4s\" (UniqueName: \"kubernetes.io/projected/4403a2a7-6d17-44b3-9837-52f8817cd43e-kube-api-access-tsc4s\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.841520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"825a377f-a7b3-4a9c-a39c-8e3086eb554f","Type":"ContainerStarted","Data":"834140af789cf1bfe04c987473951f689ac47afe54dec6c669e8dd5f9019522b"} Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.843330 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d07487c-33de-4aa4-9878-bcdd17e2a1d9","Type":"ContainerStarted","Data":"0cdce4a518a54ccac569a75c53e367ccc3aafbc81169aa82af2ce4e47de23241"} Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.845113 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdslv" event={"ID":"538da925-a098-483e-a112-334d0930655e","Type":"ContainerStarted","Data":"98cbf9a3e050354f27f1f8067152620b64856c5188f9d68830dc3e43f7faa3a2"} Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.846744 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" event={"ID":"4403a2a7-6d17-44b3-9837-52f8817cd43e","Type":"ContainerDied","Data":"4061a82d69318258b38d775ea0b5a19d05639b27964008d253b8b3a3c3c7b8b6"} Nov 29 07:36:44 crc kubenswrapper[4660]: I1129 07:36:44.846757 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6t8b8" Nov 29 07:36:45 crc kubenswrapper[4660]: I1129 07:36:45.164572 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:36:45 crc kubenswrapper[4660]: I1129 07:36:45.173693 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6t8b8"] Nov 29 07:36:45 crc kubenswrapper[4660]: I1129 07:36:45.703247 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4403a2a7-6d17-44b3-9837-52f8817cd43e" path="/var/lib/kubelet/pods/4403a2a7-6d17-44b3-9837-52f8817cd43e/volumes" Nov 29 07:36:45 crc kubenswrapper[4660]: I1129 07:36:45.858982 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerStarted","Data":"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe"} Nov 29 07:36:45 crc kubenswrapper[4660]: I1129 07:36:45.862846 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerStarted","Data":"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7"} Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.889569 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdslv" event={"ID":"538da925-a098-483e-a112-334d0930655e","Type":"ContainerStarted","Data":"2fdae876bff043714743508144b57a7724691f6c9e9cc4bd359fcdc090af02c1"} Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.891069 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26" event={"ID":"a75569c9-ce83-4515-894c-b067e01f3d9b","Type":"ContainerStarted","Data":"9fc3390919537109f283c515b7c6494e2742ae3133cd641e136815602c663422"} Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.891181 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xdz26" Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.892481 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"825a377f-a7b3-4a9c-a39c-8e3086eb554f","Type":"ContainerStarted","Data":"d24a5b197c26d03f7ce717cc9f4e23672c3239fe1d5bdcff5e95901ae7da5c5e"} Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.893713 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d07487c-33de-4aa4-9878-bcdd17e2a1d9","Type":"ContainerStarted","Data":"c183988a2e836222208950e4df9103279b9c43e218bfd365e42a431ad2d0f838"} Nov 29 07:36:49 crc kubenswrapper[4660]: I1129 07:36:49.924424 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xdz26" podStartSLOduration=37.241080271 podStartE2EDuration="41.924406675s" podCreationTimestamp="2025-11-29 07:36:08 +0000 UTC" firstStartedPulling="2025-11-29 07:36:42.523408077 +0000 UTC m=+1293.076937976" lastFinishedPulling="2025-11-29 07:36:47.206734481 +0000 UTC m=+1297.760264380" observedRunningTime="2025-11-29 07:36:49.923317976 +0000 UTC m=+1300.476847875" watchObservedRunningTime="2025-11-29 07:36:49.924406675 +0000 UTC m=+1300.477936574" Nov 29 07:36:50 crc kubenswrapper[4660]: I1129 07:36:50.902070 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"46c3b1d2-02f5-4632-bf44-648754c2e83c","Type":"ContainerStarted","Data":"406dc2b5fda7c0b208289109a7d89f7f380cc88aac7ccf70eec84317d79cc876"} Nov 29 07:36:50 crc kubenswrapper[4660]: I1129 07:36:50.902571 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 29 07:36:50 crc kubenswrapper[4660]: I1129 07:36:50.904344 4660 generic.go:334] "Generic (PLEG): container finished" podID="538da925-a098-483e-a112-334d0930655e" containerID="2fdae876bff043714743508144b57a7724691f6c9e9cc4bd359fcdc090af02c1" exitCode=0 Nov 29 07:36:50 crc kubenswrapper[4660]: I1129 07:36:50.905352 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdslv" event={"ID":"538da925-a098-483e-a112-334d0930655e","Type":"ContainerDied","Data":"2fdae876bff043714743508144b57a7724691f6c9e9cc4bd359fcdc090af02c1"} Nov 29 07:36:50 crc kubenswrapper[4660]: I1129 07:36:50.920539 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.93043175 podStartE2EDuration="48.920515417s" podCreationTimestamp="2025-11-29 07:36:02 +0000 UTC" firstStartedPulling="2025-11-29 07:36:03.17101649 +0000 UTC m=+1253.724546389" lastFinishedPulling="2025-11-29 07:36:50.161100147 +0000 UTC m=+1300.714630056" observedRunningTime="2025-11-29 07:36:50.919828157 +0000 UTC m=+1301.473358056" watchObservedRunningTime="2025-11-29 07:36:50.920515417 +0000 UTC m=+1301.474045316" Nov 29 07:36:51 crc kubenswrapper[4660]: I1129 07:36:51.912742 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdslv" event={"ID":"538da925-a098-483e-a112-334d0930655e","Type":"ContainerStarted","Data":"87ca1d3ddaeeeaa9387bdaa2356212321f341b87cd96268de170786f4cc7ffc0"} Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.930337 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"825a377f-a7b3-4a9c-a39c-8e3086eb554f","Type":"ContainerStarted","Data":"3b1079425e596abcf5738b96585b68ccce09c102e2868fc3df94591a3cbb5cba"} Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.935360 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6d07487c-33de-4aa4-9878-bcdd17e2a1d9","Type":"ContainerStarted","Data":"180f21950bdf88d01bc4415dd1e5d9cd95e0c1cd96c957d904855dbcb38e7597"} Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.938414 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdslv" event={"ID":"538da925-a098-483e-a112-334d0930655e","Type":"ContainerStarted","Data":"830b7b2d686203b9b20efac79590fb21388779f93c73230338cabe2397f66b32"} Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.938542 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.938557 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.952541 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=35.413651463 podStartE2EDuration="43.952522735s" podCreationTimestamp="2025-11-29 07:36:10 +0000 UTC" firstStartedPulling="2025-11-29 07:36:44.24245584 +0000 UTC m=+1294.795985739" lastFinishedPulling="2025-11-29 07:36:52.781327112 +0000 UTC m=+1303.334857011" observedRunningTime="2025-11-29 
07:36:53.948451363 +0000 UTC m=+1304.501981262" watchObservedRunningTime="2025-11-29 07:36:53.952522735 +0000 UTC m=+1304.506052634" Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.990376 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rdslv" podStartSLOduration=42.171362369 podStartE2EDuration="44.990338339s" podCreationTimestamp="2025-11-29 07:36:09 +0000 UTC" firstStartedPulling="2025-11-29 07:36:44.386308191 +0000 UTC m=+1294.939838090" lastFinishedPulling="2025-11-29 07:36:47.205284161 +0000 UTC m=+1297.758814060" observedRunningTime="2025-11-29 07:36:53.985126166 +0000 UTC m=+1304.538656065" watchObservedRunningTime="2025-11-29 07:36:53.990338339 +0000 UTC m=+1304.543868248" Nov 29 07:36:53 crc kubenswrapper[4660]: I1129 07:36:53.990498 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=37.124322607 podStartE2EDuration="45.990492073s" podCreationTimestamp="2025-11-29 07:36:08 +0000 UTC" firstStartedPulling="2025-11-29 07:36:43.870085433 +0000 UTC m=+1294.423615332" lastFinishedPulling="2025-11-29 07:36:52.736254899 +0000 UTC m=+1303.289784798" observedRunningTime="2025-11-29 07:36:53.968588749 +0000 UTC m=+1304.522118648" watchObservedRunningTime="2025-11-29 07:36:53.990492073 +0000 UTC m=+1304.544021982" Nov 29 07:36:54 crc kubenswrapper[4660]: I1129 07:36:54.028817 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:54 crc kubenswrapper[4660]: I1129 07:36:54.286675 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:54 crc kubenswrapper[4660]: I1129 07:36:54.944975 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:54 crc kubenswrapper[4660]: I1129 07:36:54.980286 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.254044 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.291443 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.293141 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.309445 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.309856 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.334680 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.334725 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.393469 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.429381 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzf66\" (UniqueName: \"kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.429485 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.429526 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.429553 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.531125 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.531207 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.531232 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc\") pod 
\"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.531290 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzf66\" (UniqueName: \"kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.531998 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.532300 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.532505 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.552640 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wkgc6"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.554158 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.558237 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.582017 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wkgc6"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.589470 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzf66\" (UniqueName: \"kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66\") pod \"dnsmasq-dns-5bf47b49b7-bkldr\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.621342 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.733985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovn-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.734359 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-config\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.734391 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.734444 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wtv\" (UniqueName: \"kubernetes.io/projected/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-kube-api-access-m5wtv\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.734718 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovs-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.734772 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-combined-ca-bundle\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.773091 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.811659 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.817017 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.820137 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.836875 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovs-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.837236 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-combined-ca-bundle\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.837379 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovn-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.837799 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-config\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.837960 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.838098 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wtv\" (UniqueName: \"kubernetes.io/projected/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-kube-api-access-m5wtv\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.838823 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovs-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.839079 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"] Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.840734 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-ovn-rundir\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.842361 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-config\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.857578 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-combined-ca-bundle\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.861313 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.899976 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wtv\" (UniqueName: \"kubernetes.io/projected/3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81-kube-api-access-m5wtv\") pod \"ovn-controller-metrics-wkgc6\" (UID: \"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81\") " pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.945396 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.945495 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbvd\" (UniqueName: \"kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.945519 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.945536 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.945601 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 
07:36:55.971713 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" event={"ID":"71c939cc-c281-4adf-b76d-8b31680a500b","Type":"ContainerDied","Data":"abf055bc1a645cb96f0c7e9a9883bb6650f1c76e11e349b10ff69d974c67d009"} Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.971768 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf055bc1a645cb96f0c7e9a9883bb6650f1c76e11e349b10ff69d974c67d009" Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.977354 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910","Type":"ContainerStarted","Data":"c153873a6eaac69f67d86b344d2cbbc4b04ba232ffab25f97283543a99bda844"} Nov 29 07:36:55 crc kubenswrapper[4660]: I1129 07:36:55.999700 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.047881 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.047941 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.047997 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgbvd\" (UniqueName: \"kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.048017 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.048031 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.048777 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.049383 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.054150 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.054306 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.066125 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.070024 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgbvd\" (UniqueName: \"kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd\") pod \"dnsmasq-dns-8554648995-vzsmd\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.148824 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config\") pod \"71c939cc-c281-4adf-b76d-8b31680a500b\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.149264 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk5df\" (UniqueName: \"kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df\") pod \"71c939cc-c281-4adf-b76d-8b31680a500b\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.149325 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc\") pod \"71c939cc-c281-4adf-b76d-8b31680a500b\" (UID: \"71c939cc-c281-4adf-b76d-8b31680a500b\") " Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.150137 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config" (OuterVolumeSpecName: "config") pod "71c939cc-c281-4adf-b76d-8b31680a500b" (UID: "71c939cc-c281-4adf-b76d-8b31680a500b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.153204 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.156183 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71c939cc-c281-4adf-b76d-8b31680a500b" (UID: "71c939cc-c281-4adf-b76d-8b31680a500b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.158764 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df" (OuterVolumeSpecName: "kube-api-access-rk5df") pod "71c939cc-c281-4adf-b76d-8b31680a500b" (UID: "71c939cc-c281-4adf-b76d-8b31680a500b"). InnerVolumeSpecName "kube-api-access-rk5df". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.172478 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wkgc6" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.251991 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk5df\" (UniqueName: \"kubernetes.io/projected/71c939cc-c281-4adf-b76d-8b31680a500b-kube-api-access-rk5df\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.252041 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.252056 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c939cc-c281-4adf-b76d-8b31680a500b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.377219 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.378710 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.389841 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.390109 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.390285 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v5fm5" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.390469 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.408642 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455008 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455065 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqst\" (UniqueName: \"kubernetes.io/projected/a17c15c7-a4af-4447-b315-8558385d4449-kube-api-access-ncqst\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455102 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455159 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a17c15c7-a4af-4447-b315-8558385d4449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455194 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-config\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455290 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-scripts\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.455315 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557214 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-scripts\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557254 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557330 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqst\" (UniqueName: \"kubernetes.io/projected/a17c15c7-a4af-4447-b315-8558385d4449-kube-api-access-ncqst\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557356 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc 
kubenswrapper[4660]: I1129 07:36:56.557394 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a17c15c7-a4af-4447-b315-8558385d4449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.557421 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-config\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.558256 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-config\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.559366 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a17c15c7-a4af-4447-b315-8558385d4449-scripts\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.559421 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a17c15c7-a4af-4447-b315-8558385d4449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.571466 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.572351 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.573470 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a17c15c7-a4af-4447-b315-8558385d4449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.579447 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqst\" (UniqueName: \"kubernetes.io/projected/a17c15c7-a4af-4447-b315-8558385d4449-kube-api-access-ncqst\") pod \"ovn-northd-0\" (UID: \"a17c15c7-a4af-4447-b315-8558385d4449\") " pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.613622 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"] Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.730299 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.828493 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"] Nov 29 07:36:56 crc kubenswrapper[4660]: I1129 07:36:56.965553 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wkgc6"] Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.006250 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vzsmd" event={"ID":"d51cb0ce-5bfb-4755-a879-c83e3f552f55","Type":"ContainerStarted","Data":"853f9bcef3cce0a96239f04883a8ed7e160cddc19e02d153d538b79a001f4cc6"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.009065 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerStarted","Data":"6f7ade871f151b9d131d2298e02d2cc98824e0eacd14834c97def4771353c459"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.009274 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerStarted","Data":"9ea6bea342eceb9e86c2f58809888791e24f2fee9e95ea3285b01cb11fc8ce50"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.017847 4660 generic.go:334] "Generic (PLEG): container finished" podID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" containerID="55c552367a7da6033d0dbb920ad43f0a08c536e3ebd1c99e5eb3f552679452b9" exitCode=0 Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.017951 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" event={"ID":"43057c2f-d0e6-444c-9486-1fa1b4ad03c8","Type":"ContainerDied","Data":"55c552367a7da6033d0dbb920ad43f0a08c536e3ebd1c99e5eb3f552679452b9"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.022114 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f","Type":"ContainerStarted","Data":"5b200f9ef51ccd92c5bf10cc681f95ae05e6fe699567378476bc2636bdb9d66b"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.031602 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4ng99" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.033909 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wkgc6" event={"ID":"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81","Type":"ContainerStarted","Data":"eb8793de4ee49fb4b1b7fafed65a3b3f94c2f0764c95d10bc9162502e3f690f2"} Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.072062 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.160472 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.169221 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4ng99"] Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.400391 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.580053 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config\") pod \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.580398 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz929\" (UniqueName: \"kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929\") pod \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.580562 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc\") pod \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\" (UID: \"43057c2f-d0e6-444c-9486-1fa1b4ad03c8\") " Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.585431 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929" (OuterVolumeSpecName: "kube-api-access-bz929") pod "43057c2f-d0e6-444c-9486-1fa1b4ad03c8" (UID: "43057c2f-d0e6-444c-9486-1fa1b4ad03c8"). InnerVolumeSpecName "kube-api-access-bz929". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.600315 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43057c2f-d0e6-444c-9486-1fa1b4ad03c8" (UID: "43057c2f-d0e6-444c-9486-1fa1b4ad03c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.606251 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config" (OuterVolumeSpecName: "config") pod "43057c2f-d0e6-444c-9486-1fa1b4ad03c8" (UID: "43057c2f-d0e6-444c-9486-1fa1b4ad03c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.681975 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.682014 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.682028 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz929\" (UniqueName: \"kubernetes.io/projected/43057c2f-d0e6-444c-9486-1fa1b4ad03c8-kube-api-access-bz929\") on node \"crc\" DevicePath \"\"" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.703139 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c939cc-c281-4adf-b76d-8b31680a500b" path="/var/lib/kubelet/pods/71c939cc-c281-4adf-b76d-8b31680a500b/volumes" Nov 29 07:36:57 crc kubenswrapper[4660]: I1129 07:36:57.703590 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.041098 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wkgc6" event={"ID":"3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81","Type":"ContainerStarted","Data":"d56243fb1660b1bd9993961148b5f7de86c166d5036d0bf12fff01a13112fe26"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.050810 4660 generic.go:334] "Generic (PLEG): container finished" podID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerID="330551ae07cec752b703b8b7ea36757a273c38201a550390609405d1e0802997" exitCode=0 Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.050882 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vzsmd" event={"ID":"d51cb0ce-5bfb-4755-a879-c83e3f552f55","Type":"ContainerDied","Data":"330551ae07cec752b703b8b7ea36757a273c38201a550390609405d1e0802997"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.053929 4660 generic.go:334] "Generic (PLEG): container finished" podID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerID="6f7ade871f151b9d131d2298e02d2cc98824e0eacd14834c97def4771353c459" exitCode=0 Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.053981 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerDied","Data":"6f7ade871f151b9d131d2298e02d2cc98824e0eacd14834c97def4771353c459"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.054002 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerStarted","Data":"ce62e3d6121fc50528526c85a5ebd03882ecfc1fc5ca5d0328bac7514af17981"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.054548 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.094665 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wkgc6" podStartSLOduration=3.094648023 podStartE2EDuration="3.094648023s" podCreationTimestamp="2025-11-29 07:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:36:58.0644801 +0000 UTC m=+1308.618010009" watchObservedRunningTime="2025-11-29 07:36:58.094648023 +0000 UTC m=+1308.648177922" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.129696 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a17c15c7-a4af-4447-b315-8558385d4449","Type":"ContainerStarted","Data":"2c22f2dbd9b9115e359cea7d50dc75b5682c98c9bc9dee10451839c88db89eb3"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.134545 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.134924 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcvwn" event={"ID":"43057c2f-d0e6-444c-9486-1fa1b4ad03c8","Type":"ContainerDied","Data":"d15f836334416a94254e574798184f967b7388d1a8d4cbc3c4ba941fe2b228a1"} Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.134948 4660 scope.go:117] "RemoveContainer" containerID="55c552367a7da6033d0dbb920ad43f0a08c536e3ebd1c99e5eb3f552679452b9" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.165469 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" podStartSLOduration=3.165451696 podStartE2EDuration="3.165451696s" podCreationTimestamp="2025-11-29 07:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:36:58.157190808 +0000 UTC m=+1308.710720707" watchObservedRunningTime="2025-11-29 07:36:58.165451696 +0000 UTC m=+1308.718981585" Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.321720 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:36:58 crc kubenswrapper[4660]: I1129 07:36:58.332794 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcvwn"] Nov 29 07:36:59 crc kubenswrapper[4660]: I1129 07:36:59.713810 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" path="/var/lib/kubelet/pods/43057c2f-d0e6-444c-9486-1fa1b4ad03c8/volumes" Nov 29 07:37:00 crc kubenswrapper[4660]: I1129 07:37:00.173372 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a17c15c7-a4af-4447-b315-8558385d4449","Type":"ContainerStarted","Data":"b842396bb8f7ca06cfa7c81f682b882a402ccde277791299b3dd0db73efb125b"} Nov 29 07:37:00 crc kubenswrapper[4660]: I1129 07:37:00.180800 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vzsmd" event={"ID":"d51cb0ce-5bfb-4755-a879-c83e3f552f55","Type":"ContainerStarted","Data":"ed98c9c32f393d428dd174dec0e7f42ae7aa20254686eb48db8d25a16fa58b98"} Nov 29 07:37:00 crc kubenswrapper[4660]: I1129 07:37:00.181154 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:37:01 crc kubenswrapper[4660]: I1129 07:37:01.191371 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a17c15c7-a4af-4447-b315-8558385d4449","Type":"ContainerStarted","Data":"64e9098b61f3c7ecc19b623832a2963d3eb8b241bee91098a38058fafea6a417"} Nov 29 07:37:01 crc kubenswrapper[4660]: I1129 07:37:01.191739 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-northd-0" Nov 29 07:37:01 crc kubenswrapper[4660]: I1129 07:37:01.211585 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.569624401 podStartE2EDuration="5.211571306s" podCreationTimestamp="2025-11-29 07:36:56 +0000 UTC" firstStartedPulling="2025-11-29 07:36:57.159813392 +0000 UTC m=+1307.713343291" lastFinishedPulling="2025-11-29 07:36:59.801760287 +0000 UTC m=+1310.355290196" observedRunningTime="2025-11-29 07:37:01.209083317 +0000 UTC m=+1311.762613216" watchObservedRunningTime="2025-11-29 07:37:01.211571306 +0000 UTC m=+1311.765101195" Nov 29 07:37:01 crc kubenswrapper[4660]: I1129 07:37:01.212396 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vzsmd" podStartSLOduration=6.212390918 podStartE2EDuration="6.212390918s" podCreationTimestamp="2025-11-29 07:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:00.202847076 +0000 UTC m=+1310.756376995" watchObservedRunningTime="2025-11-29 07:37:01.212390918 +0000 UTC m=+1311.765920817" Nov 29 07:37:02 crc kubenswrapper[4660]: I1129 07:37:02.199731 4660 generic.go:334] "Generic (PLEG): container finished" podID="eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910" containerID="c153873a6eaac69f67d86b344d2cbbc4b04ba232ffab25f97283543a99bda844" exitCode=0 Nov 29 07:37:02 crc kubenswrapper[4660]: I1129 07:37:02.199797 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910","Type":"ContainerDied","Data":"c153873a6eaac69f67d86b344d2cbbc4b04ba232ffab25f97283543a99bda844"} Nov 29 07:37:02 crc kubenswrapper[4660]: I1129 07:37:02.203023 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e","Type":"ContainerStarted","Data":"c9072f8214b449617015915bc0e53754d53d2811c556a0ad8a2de25d921f9148"} Nov 29 07:37:02 crc kubenswrapper[4660]: I1129 07:37:02.203562 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 07:37:02 crc kubenswrapper[4660]: I1129 07:37:02.248571 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.030777275 podStartE2EDuration="58.248552635s" podCreationTimestamp="2025-11-29 07:36:04 +0000 UTC" firstStartedPulling="2025-11-29 07:36:05.195318268 +0000 UTC m=+1255.748848167" lastFinishedPulling="2025-11-29 07:37:01.413093608 +0000 UTC m=+1311.966623527" observedRunningTime="2025-11-29 07:37:02.245663225 +0000 UTC m=+1312.799193144" watchObservedRunningTime="2025-11-29 07:37:02.248552635 +0000 UTC m=+1312.802082534" Nov 29 07:37:03 crc kubenswrapper[4660]: I1129 07:37:03.211768 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910","Type":"ContainerStarted","Data":"7a2ac9a8b3914c42833669e4136515b0a2be119de82e2fffe684684c535065e4"} Nov 29 07:37:03 crc kubenswrapper[4660]: I1129 07:37:03.217757 4660 generic.go:334] "Generic (PLEG): container finished" podID="4a1c83c7-2cac-4b54-90c4-080b7f50cd7f" containerID="5b200f9ef51ccd92c5bf10cc681f95ae05e6fe699567378476bc2636bdb9d66b" exitCode=0 Nov 29 07:37:03 crc kubenswrapper[4660]: I1129 07:37:03.217833 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f","Type":"ContainerDied","Data":"5b200f9ef51ccd92c5bf10cc681f95ae05e6fe699567378476bc2636bdb9d66b"} Nov 29 07:37:03 crc kubenswrapper[4660]: I1129 07:37:03.236691 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.088841701 podStartE2EDuration="1m4.236671816s" podCreationTimestamp="2025-11-29 07:35:59 +0000 UTC" firstStartedPulling="2025-11-29 07:36:01.659763981 +0000 UTC m=+1252.213293880" lastFinishedPulling="2025-11-29 07:36:53.807594076 +0000 UTC m=+1304.361123995" observedRunningTime="2025-11-29 07:37:03.232731387 +0000 UTC m=+1313.786261286" watchObservedRunningTime="2025-11-29 07:37:03.236671816 +0000 UTC m=+1313.790201715" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.226677 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4a1c83c7-2cac-4b54-90c4-080b7f50cd7f","Type":"ContainerStarted","Data":"731d13486d59e5b8893d8612a7fb6cd119841e24b5a796d8210f03129ef7aa4a"} Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.734339 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.605105439999999 podStartE2EDuration="1m4.734322749s" podCreationTimestamp="2025-11-29 07:36:00 +0000 UTC" firstStartedPulling="2025-11-29 07:36:02.866440094 +0000 UTC m=+1253.419969993" lastFinishedPulling="2025-11-29 07:36:55.995657403 +0000 UTC m=+1306.549187302" observedRunningTime="2025-11-29 07:37:04.25387217 +0000 UTC m=+1314.807402069" watchObservedRunningTime="2025-11-29 07:37:04.734322749 +0000 UTC m=+1315.287852648" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.737406 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"] Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.737644 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="dnsmasq-dns" containerID="cri-o://ce62e3d6121fc50528526c85a5ebd03882ecfc1fc5ca5d0328bac7514af17981" gracePeriod=10 Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.738747 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.779678 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"] Nov 29 07:37:04 crc kubenswrapper[4660]: E1129 07:37:04.780070 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" containerName="init" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.780090 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" containerName="init" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.780264 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="43057c2f-d0e6-444c-9486-1fa1b4ad03c8" containerName="init" Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.781337 4660 util.go:30] "No sandbox for pod can be found. 
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.781337 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.813649 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"]
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.978907 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.978964 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr9s\" (UniqueName: \"kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.979041 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.979107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:04 crc kubenswrapper[4660]: I1129 07:37:04.979140 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.080252 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.080585 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.080658 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.080687 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr9s\" (UniqueName: \"kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.080764 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.081179 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.081724 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.081790 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.082031 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.109650 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr9s\" (UniqueName: \"kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s\") pod \"dnsmasq-dns-b8fbc5445-k95b5\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.183455 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.245100 4660 generic.go:334] "Generic (PLEG): container finished" podID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerID="ce62e3d6121fc50528526c85a5ebd03882ecfc1fc5ca5d0328bac7514af17981" exitCode=0
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.245141 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerDied","Data":"ce62e3d6121fc50528526c85a5ebd03882ecfc1fc5ca5d0328bac7514af17981"}
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.319774 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.488859 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config\") pod \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") "
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.488958 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzf66\" (UniqueName: \"kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66\") pod \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") "
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.489060 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb\") pod \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") "
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.489125 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc\") pod \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\" (UID: \"dc692c38-a11e-470c-98e4-d6df1a7b2aff\") "
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.495790 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66" (OuterVolumeSpecName: "kube-api-access-wzf66") pod "dc692c38-a11e-470c-98e4-d6df1a7b2aff" (UID: "dc692c38-a11e-470c-98e4-d6df1a7b2aff"). InnerVolumeSpecName "kube-api-access-wzf66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.525267 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config" (OuterVolumeSpecName: "config") pod "dc692c38-a11e-470c-98e4-d6df1a7b2aff" (UID: "dc692c38-a11e-470c-98e4-d6df1a7b2aff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.533501 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc692c38-a11e-470c-98e4-d6df1a7b2aff" (UID: "dc692c38-a11e-470c-98e4-d6df1a7b2aff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.536971 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc692c38-a11e-470c-98e4-d6df1a7b2aff" (UID: "dc692c38-a11e-470c-98e4-d6df1a7b2aff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.590509 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.590539 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.590548 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc692c38-a11e-470c-98e4-d6df1a7b2aff-config\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.590557 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzf66\" (UniqueName: \"kubernetes.io/projected/dc692c38-a11e-470c-98e4-d6df1a7b2aff-kube-api-access-wzf66\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:05 crc kubenswrapper[4660]: W1129 07:37:05.681199 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d28983_0802_4233_b388_506681c95edd.slice/crio-741e1242a2d1ad13dc5b7e5b20e26c8bb211b36e140675a52a54eea1b9613372 WatchSource:0}: Error finding container 741e1242a2d1ad13dc5b7e5b20e26c8bb211b36e140675a52a54eea1b9613372: Status 404 returned error can't find the container with id 741e1242a2d1ad13dc5b7e5b20e26c8bb211b36e140675a52a54eea1b9613372
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.687973 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"]
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.965079 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 29 07:37:05 crc kubenswrapper[4660]: E1129 07:37:05.965672 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="init"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.965693 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="init"
Nov 29 07:37:05 crc kubenswrapper[4660]: E1129 07:37:05.965700 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="dnsmasq-dns"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.965706 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="dnsmasq-dns"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.965878 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" containerName="dnsmasq-dns"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.970555 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.973080 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.973241 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.973327 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4h5pl"
Nov 29 07:37:05 crc kubenswrapper[4660]: I1129 07:37:05.973662 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:05.997765 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.103594 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-cache\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.103673 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.103698 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.103802 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkqj\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-kube-api-access-6nkqj\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.104630 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-lock\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.154771 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vzsmd"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.206595 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-lock\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.206708 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-cache\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.206743 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.206777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.206803 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkqj\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-kube-api-access-6nkqj\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.207700 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-lock\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.207985 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1875d22e-2809-4d96-9cb9-bac77320c5a3-cache\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.208124 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.208147 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.208192 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:06.708175085 +0000 UTC m=+1317.261704984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift") pod "swift-storage-0" (UID: "1875d22e-2809-4d96-9cb9-bac77320c5a3") : configmap "swift-ring-files" not found
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.208480 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.249571 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkqj\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-kube-api-access-6nkqj\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.258545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.266802 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.266857 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bkldr" event={"ID":"dc692c38-a11e-470c-98e4-d6df1a7b2aff","Type":"ContainerDied","Data":"9ea6bea342eceb9e86c2f58809888791e24f2fee9e95ea3285b01cb11fc8ce50"}
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.266940 4660 scope.go:117] "RemoveContainer" containerID="ce62e3d6121fc50528526c85a5ebd03882ecfc1fc5ca5d0328bac7514af17981"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.271499 4660 generic.go:334] "Generic (PLEG): container finished" podID="e1d28983-0802-4233-b388-506681c95edd" containerID="2b9ba5d3215ead505dbacac3ad7d78532c14eaf355d36879c47cc0059ca0d587" exitCode=0
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.271547 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" event={"ID":"e1d28983-0802-4233-b388-506681c95edd","Type":"ContainerDied","Data":"2b9ba5d3215ead505dbacac3ad7d78532c14eaf355d36879c47cc0059ca0d587"}
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.271576 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" event={"ID":"e1d28983-0802-4233-b388-506681c95edd","Type":"ContainerStarted","Data":"741e1242a2d1ad13dc5b7e5b20e26c8bb211b36e140675a52a54eea1b9613372"}
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.372380 4660 scope.go:117] "RemoveContainer" containerID="6f7ade871f151b9d131d2298e02d2cc98824e0eacd14834c97def4771353c459"
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.396631 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"]
Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.402266 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bkldr"]
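The three E1129 lines above are the first round of a failure that repeats below: swift-storage-0's projected etc-swift volume sources the configmap swift-ring-files, which does not exist yet; the swift-ring-rebalance job being scheduled in parallel is what would publish it. A hypothetical client-go probe for the blocking object (the kubeconfig path and client-go module version are assumptions, not part of the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Checks whether the ConfigMap blocking the etc-swift projected volume
// exists yet. Diagnostic sketch only; "/path/to/kubeconfig" is a placeholder.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	if err != nil {
		// Mirrors the kubelet error until the rebalance job publishes the ring files.
		fmt.Println("still missing:", err)
		return
	}
	fmt.Println("found, keys:", len(cm.Data))
}

The equivalent CLI check would be oc get configmap swift-ring-files -n openstack.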
pods=["openstack/swift-ring-rebalance-9gk46"] Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.488246 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.491459 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.491467 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.491880 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.507194 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9gk46"] Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.517566 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9gk46"] Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.518352 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-ts8cg ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-ts8cg ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-9gk46" podUID="5d86c28e-1a7a-4dad-8792-01bfd513f571" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.571497 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5xg97"] Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.572729 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.588016 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5xg97"] Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617188 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617243 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617312 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617331 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617358 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617380 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.617397 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8cg\" (UniqueName: \"kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.718810 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.718864 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.718894 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts8cg\" (UniqueName: \"kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.718979 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719063 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719090 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719115 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tqb\" (UniqueName: \"kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719147 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719179 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719231 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719269 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719302 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719324 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719351 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.719392 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.719417 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:37:06 crc kubenswrapper[4660]: E1129 07:37:06.719465 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:07.719446997 +0000 UTC m=+1318.272976976 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift") pod "swift-storage-0" (UID: "1875d22e-2809-4d96-9cb9-bac77320c5a3") : configmap "swift-ring-files" not found Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.719500 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.720097 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.720994 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.723189 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.724699 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.726429 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.740864 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts8cg\" (UniqueName: \"kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg\") pod \"swift-ring-rebalance-9gk46\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") " pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.820699 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.820852 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices\") pod \"swift-ring-rebalance-5xg97\" 
(UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.820929 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.820985 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.821009 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tqb\" (UniqueName: \"kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.821039 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.821097 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.822022 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.822099 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.822540 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.823952 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.824584 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.825879 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.840470 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tqb\" (UniqueName: \"kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb\") pod \"swift-ring-rebalance-5xg97\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:06 crc kubenswrapper[4660]: I1129 07:37:06.889301 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.290740 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" event={"ID":"e1d28983-0802-4233-b388-506681c95edd","Type":"ContainerStarted","Data":"12e4b3443a159c7a4d826e7b277a52f684976e6d52cf0aad9be435eebb152806"} Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.291621 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9gk46" Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.291759 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.318874 4660 util.go:30] "No sandbox for pod can be found. 
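Every payload after the "crc kubenswrapper[4660]:" prefix in these lines is a klog header (severity letter plus MMDD date, wall time, PID, file:line) followed by a structured message. A best-effort Go sketch for pulling such lines apart; the regex is tuned to the lines in this log, not a general klog grammar:

package main

import (
	"fmt"
	"regexp"
)

// Splits a kubelet klog payload like
//   I1129 07:37:07.291759 4660 kubelet.go:2542] "SyncLoop (probe)" ...
// into severity, date, time, PID, source location, and message.
var klogLine = regexp.MustCompile(`^([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I1129 07:37:07.291759 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

The severity letters seen here are I (info), W (warning), and E (error), and the date field is month+day (1129 = Nov 29), matching the journald timestamps on the same lines.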
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.318874 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9gk46"
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.326798 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" podStartSLOduration=3.326781968 podStartE2EDuration="3.326781968s" podCreationTimestamp="2025-11-29 07:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:07.319305512 +0000 UTC m=+1317.872835411" watchObservedRunningTime="2025-11-29 07:37:07.326781968 +0000 UTC m=+1317.880311867"
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.432697 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts8cg\" (UniqueName: \"kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433048 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433167 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433316 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433428 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433536 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433775 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.433965 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts" (OuterVolumeSpecName: "scripts") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.435229 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.435426 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf\") pod \"5d86c28e-1a7a-4dad-8792-01bfd513f571\" (UID: \"5d86c28e-1a7a-4dad-8792-01bfd513f571\") "
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.436215 4660 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d86c28e-1a7a-4dad-8792-01bfd513f571-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.436318 4660 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-ring-data-devices\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.436395 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d86c28e-1a7a-4dad-8792-01bfd513f571-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.439618 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.439760 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg" (OuterVolumeSpecName: "kube-api-access-ts8cg") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "kube-api-access-ts8cg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.441112 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.444266 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d86c28e-1a7a-4dad-8792-01bfd513f571" (UID: "5d86c28e-1a7a-4dad-8792-01bfd513f571"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.538163 4660 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.538199 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.538213 4660 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d86c28e-1a7a-4dad-8792-01bfd513f571-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.538224 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts8cg\" (UniqueName: \"kubernetes.io/projected/5d86c28e-1a7a-4dad-8792-01bfd513f571-kube-api-access-ts8cg\") on node \"crc\" DevicePath \"\""
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.561026 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5xg97"]
Nov 29 07:37:07 crc kubenswrapper[4660]: W1129 07:37:07.564711 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd487e762_0eca_4f42_aae2_1b8674868db1.slice/crio-c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4 WatchSource:0}: Error finding container c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4: Status 404 returned error can't find the container with id c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.702233 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc692c38-a11e-470c-98e4-d6df1a7b2aff" path="/var/lib/kubelet/pods/dc692c38-a11e-470c-98e4-d6df1a7b2aff/volumes"
Nov 29 07:37:07 crc kubenswrapper[4660]: I1129 07:37:07.740191 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:07 crc kubenswrapper[4660]: E1129 07:37:07.740640 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:37:07 crc kubenswrapper[4660]: E1129 07:37:07.740668 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:37:07 crc kubenswrapper[4660]: E1129 07:37:07.740721 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:09.740702002 +0000 UTC m=+1320.294231911 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift") pod "swift-storage-0" (UID: "1875d22e-2809-4d96-9cb9-bac77320c5a3") : configmap "swift-ring-files" not found
Nov 29 07:37:08 crc kubenswrapper[4660]: I1129 07:37:08.299717 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9gk46"
Nov 29 07:37:08 crc kubenswrapper[4660]: I1129 07:37:08.299726 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5xg97" event={"ID":"d487e762-0eca-4f42-aae2-1b8674868db1","Type":"ContainerStarted","Data":"c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4"}
Nov 29 07:37:08 crc kubenswrapper[4660]: I1129 07:37:08.351465 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-9gk46"]
Nov 29 07:37:08 crc kubenswrapper[4660]: I1129 07:37:08.357953 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-9gk46"]
Nov 29 07:37:09 crc kubenswrapper[4660]: I1129 07:37:09.719825 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d86c28e-1a7a-4dad-8792-01bfd513f571" path="/var/lib/kubelet/pods/5d86c28e-1a7a-4dad-8792-01bfd513f571/volumes"
Nov 29 07:37:09 crc kubenswrapper[4660]: I1129 07:37:09.772535 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0"
Nov 29 07:37:09 crc kubenswrapper[4660]: E1129 07:37:09.772770 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 29 07:37:09 crc kubenswrapper[4660]: E1129 07:37:09.772796 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 29 07:37:09 crc kubenswrapper[4660]: E1129 07:37:09.772851 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:13.772836246 +0000 UTC m=+1324.326366145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift") pod "swift-storage-0" (UID: "1875d22e-2809-4d96-9cb9-bac77320c5a3") : configmap "swift-ring-files" not found
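Taken together, the four etc-swift mount failures show the kubelet's per-volume retry delay doubling: 500ms (until m=+1317.26), 1s (m=+1318.27), 2s (m=+1320.29), then 4s (m=+1324.33). A Go sketch of that schedule; the initial delay and doubling factor are read off these entries, and any maximum cap is not visible in this excerpt:

package main

import (
	"fmt"
	"time"
)

// Reproduces the durationBeforeRetry progression observed above for the
// etc-swift volume: each failed attempt doubles the delay before the next try.
func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2 // 500ms -> 1s -> 2s -> 4s, matching the logged values
	}
}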
Need to start a new one" pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.322890 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xm8s\" (UniqueName: \"kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s\") pod \"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.326748 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts\") pod \"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.351186 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5sg7b"] Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.401709 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.428693 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khd2z\" (UniqueName: \"kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.428727 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.428770 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xm8s\" (UniqueName: \"kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s\") pod \"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.428821 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts\") pod \"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.430496 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts\") pod \"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.458241 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xm8s\" (UniqueName: \"kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s\") pod 
\"keystone-db-create-5sg7b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.499565 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.532053 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khd2z\" (UniqueName: \"kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.532100 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.532867 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.566156 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khd2z\" (UniqueName: \"kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z\") pod \"keystone-7b98-account-create-update-jxkfn\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.596486 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b9jkx"] Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.601501 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.601918 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.621649 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b9jkx"] Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.651564 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.677945 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4352-account-create-update-l6jvk"] Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.678983 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.683546 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.688535 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4352-account-create-update-l6jvk"] Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.734871 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.734962 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm65t\" (UniqueName: \"kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.836637 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgl4j\" (UniqueName: \"kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.836745 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.836830 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.836852 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm65t\" (UniqueName: \"kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.837680 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.853433 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm65t\" (UniqueName: 
\"kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t\") pod \"placement-db-create-b9jkx\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.928401 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.938096 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgl4j\" (UniqueName: \"kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.938175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.938814 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.970221 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgl4j\" (UniqueName: \"kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j\") pod \"placement-4352-account-create-update-l6jvk\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") " pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:12 crc kubenswrapper[4660]: I1129 07:37:12.992003 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:13 crc kubenswrapper[4660]: I1129 07:37:13.864447 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0" Nov 29 07:37:13 crc kubenswrapper[4660]: E1129 07:37:13.864684 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:37:13 crc kubenswrapper[4660]: E1129 07:37:13.865102 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:37:13 crc kubenswrapper[4660]: E1129 07:37:13.865151 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:21.865134968 +0000 UTC m=+1332.418664867 (durationBeforeRetry 8s). 
Nov 29 07:37:13 crc kubenswrapper[4660]: I1129 07:37:13.922846 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b9jkx"]
Nov 29 07:37:13 crc kubenswrapper[4660]: W1129 07:37:13.924312 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ae65a4_7c1b_41bb_b242_9229ddaa0e6b.slice/crio-088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e WatchSource:0}: Error finding container 088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e: Status 404 returned error can't find the container with id 088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e
Nov 29 07:37:13 crc kubenswrapper[4660]: I1129 07:37:13.933043 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5sg7b"]
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.100159 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4352-account-create-update-l6jvk"]
Nov 29 07:37:14 crc kubenswrapper[4660]: W1129 07:37:14.107019 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61209158_65d1_44a1_84bb_3b2f98b5566f.slice/crio-16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c WatchSource:0}: Error finding container 16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c: Status 404 returned error can't find the container with id 16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.108083 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b98-account-create-update-jxkfn"]
Nov 29 07:37:14 crc kubenswrapper[4660]: W1129 07:37:14.109232 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod483004e9_a9b0_4ea7_96b4_a7aaed456ac7.slice/crio-79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d WatchSource:0}: Error finding container 79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d: Status 404 returned error can't find the container with id 79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.376032 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b9jkx" event={"ID":"7bdc64a4-27cd-4937-8d01-9e2742b75db5","Type":"ContainerStarted","Data":"3f394f02b632c5f493909426cd0ed07d9ab249959bc717e0d0dce5ee21fd8a8d"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.376081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b9jkx" event={"ID":"7bdc64a4-27cd-4937-8d01-9e2742b75db5","Type":"ContainerStarted","Data":"bc8dcac6afc4b3673fe0414c8859a3a06612419086447fa506a755a4da25867e"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.379097 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5sg7b" event={"ID":"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b","Type":"ContainerStarted","Data":"3f64171e907bb072da9567e9ad8774eda9920295192fe0fde3a6a2e2f0ba6780"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.379135 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5sg7b" event={"ID":"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b","Type":"ContainerStarted","Data":"088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.381418 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b98-account-create-update-jxkfn" event={"ID":"61209158-65d1-44a1-84bb-3b2f98b5566f","Type":"ContainerStarted","Data":"74e1fcf3cb91dcdbeb197346d7b1095c1ea6c2cec35af3d8a7a81454bf52e551"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.381469 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b98-account-create-update-jxkfn" event={"ID":"61209158-65d1-44a1-84bb-3b2f98b5566f","Type":"ContainerStarted","Data":"16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.384031 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4352-account-create-update-l6jvk" event={"ID":"483004e9-a9b0-4ea7-96b4-a7aaed456ac7","Type":"ContainerStarted","Data":"79b82c321d3981911b4e5f46791675e2fad11eef618fddabb1e59981332c4b91"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.384089 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4352-account-create-update-l6jvk" event={"ID":"483004e9-a9b0-4ea7-96b4-a7aaed456ac7","Type":"ContainerStarted","Data":"79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.386824 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5xg97" event={"ID":"d487e762-0eca-4f42-aae2-1b8674868db1","Type":"ContainerStarted","Data":"66d7327e477d4e231d23b447135b3bd0a79dee481215d7dae458f1547ef4159b"}
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.405339 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-b9jkx" podStartSLOduration=2.405319097 podStartE2EDuration="2.405319097s" podCreationTimestamp="2025-11-29 07:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:14.398856758 +0000 UTC m=+1324.952386657" watchObservedRunningTime="2025-11-29 07:37:14.405319097 +0000 UTC m=+1324.958848986"
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.419315 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5sg7b" podStartSLOduration=2.419295133 podStartE2EDuration="2.419295133s" podCreationTimestamp="2025-11-29 07:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:14.415124968 +0000 UTC m=+1324.968654877" watchObservedRunningTime="2025-11-29 07:37:14.419295133 +0000 UTC m=+1324.972825032"
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.436409 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b98-account-create-update-jxkfn" podStartSLOduration=2.436383844 podStartE2EDuration="2.436383844s" podCreationTimestamp="2025-11-29 07:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:14.432161788 +0000 UTC m=+1324.985691687" watchObservedRunningTime="2025-11-29 07:37:14.436383844 +0000 UTC m=+1324.989913753"
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.479558 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5xg97" podStartSLOduration=2.652698992 podStartE2EDuration="8.479540016s" podCreationTimestamp="2025-11-29 07:37:06 +0000 UTC" firstStartedPulling="2025-11-29 07:37:07.567214894 +0000 UTC m=+1318.120744783" lastFinishedPulling="2025-11-29 07:37:13.394055908 +0000 UTC m=+1323.947585807" observedRunningTime="2025-11-29 07:37:14.46197382 +0000 UTC m=+1325.015503729" watchObservedRunningTime="2025-11-29 07:37:14.479540016 +0000 UTC m=+1325.033069915"
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.483836 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4352-account-create-update-l6jvk" podStartSLOduration=2.483820654 podStartE2EDuration="2.483820654s" podCreationTimestamp="2025-11-29 07:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:14.478795645 +0000 UTC m=+1325.032325564" watchObservedRunningTime="2025-11-29 07:37:14.483820654 +0000 UTC m=+1325.037350553"
Nov 29 07:37:14 crc kubenswrapper[4660]: I1129 07:37:14.756309 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.185159 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5"
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.256858 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"]
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.257078 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vzsmd" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="dnsmasq-dns" containerID="cri-o://ed98c9c32f393d428dd174dec0e7f42ae7aa20254686eb48db8d25a16fa58b98" gracePeriod=10
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.407309 4660 generic.go:334] "Generic (PLEG): container finished" podID="483004e9-a9b0-4ea7-96b4-a7aaed456ac7" containerID="79b82c321d3981911b4e5f46791675e2fad11eef618fddabb1e59981332c4b91" exitCode=0
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.407379 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4352-account-create-update-l6jvk" event={"ID":"483004e9-a9b0-4ea7-96b4-a7aaed456ac7","Type":"ContainerDied","Data":"79b82c321d3981911b4e5f46791675e2fad11eef618fddabb1e59981332c4b91"}
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.411172 4660 generic.go:334] "Generic (PLEG): container finished" podID="7bdc64a4-27cd-4937-8d01-9e2742b75db5" containerID="3f394f02b632c5f493909426cd0ed07d9ab249959bc717e0d0dce5ee21fd8a8d" exitCode=0
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.411286 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b9jkx" event={"ID":"7bdc64a4-27cd-4937-8d01-9e2742b75db5","Type":"ContainerDied","Data":"3f394f02b632c5f493909426cd0ed07d9ab249959bc717e0d0dce5ee21fd8a8d"}
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.412978 4660 generic.go:334] "Generic (PLEG): container finished" podID="48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" containerID="3f64171e907bb072da9567e9ad8774eda9920295192fe0fde3a6a2e2f0ba6780" exitCode=0
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.413070 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5sg7b" event={"ID":"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b","Type":"ContainerDied","Data":"3f64171e907bb072da9567e9ad8774eda9920295192fe0fde3a6a2e2f0ba6780"}
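The pod_startup_latency_tracker entries above print two figures per pod: podStartE2EDuration is the wall-clock time from podCreationTimestamp to the observed running time, and podStartSLOduration is the same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, which is why pods that needed no pull (zero-valued 0001-01-01 pull timestamps) report identical numbers. The swift-ring-rebalance-5xg97 entry checks out exactly, using only values printed in the log (the m=+ figures are the kubelet's monotonic-clock offsets):

    package main

    import "fmt"

    func main() {
        // Values exactly as printed in the swift-ring-rebalance-5xg97 entry.
        const (
            e2e       = 8.479540016    // podStartE2EDuration, seconds
            firstPull = 1318.120744783 // firstStartedPulling, m=+ offset
            lastPull  = 1323.947585807 // lastFinishedPulling, m=+ offset
        )
        // SLO duration = end-to-end startup minus the image-pull window.
        fmt.Printf("%.9f\n", e2e-(lastPull-firstPull)) // 2.652698992 = podStartSLOduration
    }

The same arithmetic reproduces the rabbitmq-server-0 figure further down (82.553246264s end-to-end minus a 44.241416188s pull window gives the reported 38.311830076s).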
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.414231 4660 generic.go:334] "Generic (PLEG): container finished" podID="61209158-65d1-44a1-84bb-3b2f98b5566f" containerID="74e1fcf3cb91dcdbeb197346d7b1095c1ea6c2cec35af3d8a7a81454bf52e551" exitCode=0
Nov 29 07:37:15 crc kubenswrapper[4660]: I1129 07:37:15.414361 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b98-account-create-update-jxkfn" event={"ID":"61209158-65d1-44a1-84bb-3b2f98b5566f","Type":"ContainerDied","Data":"74e1fcf3cb91dcdbeb197346d7b1095c1ea6c2cec35af3d8a7a81454bf52e551"}
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.154892 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-vzsmd" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.423328 4660 generic.go:334] "Generic (PLEG): container finished" podID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerID="ed98c9c32f393d428dd174dec0e7f42ae7aa20254686eb48db8d25a16fa58b98" exitCode=0
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.423539 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vzsmd" event={"ID":"d51cb0ce-5bfb-4755-a879-c83e3f552f55","Type":"ContainerDied","Data":"ed98c9c32f393d428dd174dec0e7f42ae7aa20254686eb48db8d25a16fa58b98"}
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.890313 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4352-account-create-update-l6jvk"
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.988195 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts\") pod \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") "
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.988318 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgl4j\" (UniqueName: \"kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j\") pod \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\" (UID: \"483004e9-a9b0-4ea7-96b4-a7aaed456ac7\") "
Nov 29 07:37:16 crc kubenswrapper[4660]: I1129 07:37:16.989981 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "483004e9-a9b0-4ea7-96b4-a7aaed456ac7" (UID: "483004e9-a9b0-4ea7-96b4-a7aaed456ac7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.008331 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j" (OuterVolumeSpecName: "kube-api-access-kgl4j") pod "483004e9-a9b0-4ea7-96b4-a7aaed456ac7" (UID: "483004e9-a9b0-4ea7-96b4-a7aaed456ac7"). InnerVolumeSpecName "kube-api-access-kgl4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
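The dnsmasq readiness failure above reports probeResult="failure" with a bare Go net dial error ("dial tcp 10.217.0.110:5353: connect: connection refused"), the shape of a TCP-socket readiness probe: the container had just been sent its stop signal ("Killing container with a grace period"), so nothing was listening on the pod IP any more. A minimal sketch of such a check, assuming a plain TCP probe; the address comes from the log, while the 1s timeout is invented for the example:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Attempt the same connection the kubelet prober reports on; any
        // dial error (here: connection refused) counts as "not ready".
        conn, err := net.DialTimeout("tcp", "10.217.0.110:5353", 1*time.Second)
        if err != nil {
            fmt.Println("probeResult=failure output:", err)
            return
        }
        conn.Close()
        fmt.Println("probeResult=success")
    }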
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.072093 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.093214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts\") pod \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.093795 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm65t\" (UniqueName: \"kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t\") pod \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\" (UID: \"7bdc64a4-27cd-4937-8d01-9e2742b75db5\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.094121 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgl4j\" (UniqueName: \"kubernetes.io/projected/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-kube-api-access-kgl4j\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.094133 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483004e9-a9b0-4ea7-96b4-a7aaed456ac7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.094501 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bdc64a4-27cd-4937-8d01-9e2742b75db5" (UID: "7bdc64a4-27cd-4937-8d01-9e2742b75db5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.094866 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.097315 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t" (OuterVolumeSpecName: "kube-api-access-fm65t") pod "7bdc64a4-27cd-4937-8d01-9e2742b75db5" (UID: "7bdc64a4-27cd-4937-8d01-9e2742b75db5"). InnerVolumeSpecName "kube-api-access-fm65t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.134291 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.194487 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts\") pod \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.194812 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts\") pod \"61209158-65d1-44a1-84bb-3b2f98b5566f\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.194937 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khd2z\" (UniqueName: \"kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z\") pod \"61209158-65d1-44a1-84bb-3b2f98b5566f\" (UID: \"61209158-65d1-44a1-84bb-3b2f98b5566f\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.195050 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xm8s\" (UniqueName: \"kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s\") pod \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\" (UID: \"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.195415 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" (UID: "48ae65a4-7c1b-41bb-b242-9229ddaa0e6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.195579 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm65t\" (UniqueName: \"kubernetes.io/projected/7bdc64a4-27cd-4937-8d01-9e2742b75db5-kube-api-access-fm65t\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.195790 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdc64a4-27cd-4937-8d01-9e2742b75db5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.196154 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61209158-65d1-44a1-84bb-3b2f98b5566f" (UID: "61209158-65d1-44a1-84bb-3b2f98b5566f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.199358 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s" (OuterVolumeSpecName: "kube-api-access-2xm8s") pod "48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" (UID: "48ae65a4-7c1b-41bb-b242-9229ddaa0e6b"). InnerVolumeSpecName "kube-api-access-2xm8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.200153 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z" (OuterVolumeSpecName: "kube-api-access-khd2z") pod "61209158-65d1-44a1-84bb-3b2f98b5566f" (UID: "61209158-65d1-44a1-84bb-3b2f98b5566f"). InnerVolumeSpecName "kube-api-access-khd2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.215479 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.296517 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config\") pod \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.296955 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb\") pod \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297050 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb\") pod \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297117 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc\") pod \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297236 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgbvd\" (UniqueName: \"kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd\") pod \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\" (UID: \"d51cb0ce-5bfb-4755-a879-c83e3f552f55\") " Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297723 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297741 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61209158-65d1-44a1-84bb-3b2f98b5566f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297750 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khd2z\" (UniqueName: \"kubernetes.io/projected/61209158-65d1-44a1-84bb-3b2f98b5566f-kube-api-access-khd2z\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.297761 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xm8s\" (UniqueName: \"kubernetes.io/projected/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b-kube-api-access-2xm8s\") on node \"crc\" 
DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.309046 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd" (OuterVolumeSpecName: "kube-api-access-dgbvd") pod "d51cb0ce-5bfb-4755-a879-c83e3f552f55" (UID: "d51cb0ce-5bfb-4755-a879-c83e3f552f55"). InnerVolumeSpecName "kube-api-access-dgbvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.346005 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d51cb0ce-5bfb-4755-a879-c83e3f552f55" (UID: "d51cb0ce-5bfb-4755-a879-c83e3f552f55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.347524 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config" (OuterVolumeSpecName: "config") pod "d51cb0ce-5bfb-4755-a879-c83e3f552f55" (UID: "d51cb0ce-5bfb-4755-a879-c83e3f552f55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.350067 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d51cb0ce-5bfb-4755-a879-c83e3f552f55" (UID: "d51cb0ce-5bfb-4755-a879-c83e3f552f55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.352960 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d51cb0ce-5bfb-4755-a879-c83e3f552f55" (UID: "d51cb0ce-5bfb-4755-a879-c83e3f552f55"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.399131 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgbvd\" (UniqueName: \"kubernetes.io/projected/d51cb0ce-5bfb-4755-a879-c83e3f552f55-kube-api-access-dgbvd\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.399421 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.399430 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.399440 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.399448 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d51cb0ce-5bfb-4755-a879-c83e3f552f55-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.434645 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4352-account-create-update-l6jvk" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.434599 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4352-account-create-update-l6jvk" event={"ID":"483004e9-a9b0-4ea7-96b4-a7aaed456ac7","Type":"ContainerDied","Data":"79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d"} Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.434731 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b2cfbf3d5ab407c16a6722aa3082e168d645183eb48970ada0b868f3df237d" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.437170 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b9jkx" event={"ID":"7bdc64a4-27cd-4937-8d01-9e2742b75db5","Type":"ContainerDied","Data":"bc8dcac6afc4b3673fe0414c8859a3a06612419086447fa506a755a4da25867e"} Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.437207 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b9jkx" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.437210 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc8dcac6afc4b3673fe0414c8859a3a06612419086447fa506a755a4da25867e" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.439797 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vzsmd" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.440436 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vzsmd" event={"ID":"d51cb0ce-5bfb-4755-a879-c83e3f552f55","Type":"ContainerDied","Data":"853f9bcef3cce0a96239f04883a8ed7e160cddc19e02d153d538b79a001f4cc6"} Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.440481 4660 scope.go:117] "RemoveContainer" containerID="ed98c9c32f393d428dd174dec0e7f42ae7aa20254686eb48db8d25a16fa58b98" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.442667 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5sg7b" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.442666 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5sg7b" event={"ID":"48ae65a4-7c1b-41bb-b242-9229ddaa0e6b","Type":"ContainerDied","Data":"088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e"} Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.442883 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="088fd2e19829ad329b3c98f298ed31f01d027d7b9419f57dd2404e596ef4621e" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.445669 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b98-account-create-update-jxkfn" event={"ID":"61209158-65d1-44a1-84bb-3b2f98b5566f","Type":"ContainerDied","Data":"16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c"} Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.445689 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c2426de5e8fc696ba5173f5ee881f95cac2ba5cd2b0e398b1ad39377d4ac4c" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.445732 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7b98-account-create-update-jxkfn" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.462965 4660 scope.go:117] "RemoveContainer" containerID="330551ae07cec752b703b8b7ea36757a273c38201a550390609405d1e0802997" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.509640 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"] Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.515817 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vzsmd"] Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.704120 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" path="/var/lib/kubelet/pods/d51cb0ce-5bfb-4755-a879-c83e3f552f55/volumes" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.898543 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p8r6k"] Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.898969 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.898993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.899010 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483004e9-a9b0-4ea7-96b4-a7aaed456ac7" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899019 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="483004e9-a9b0-4ea7-96b4-a7aaed456ac7" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.899036 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61209158-65d1-44a1-84bb-3b2f98b5566f" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899047 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="61209158-65d1-44a1-84bb-3b2f98b5566f" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.899062 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="dnsmasq-dns" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899088 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="dnsmasq-dns" Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.899120 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdc64a4-27cd-4937-8d01-9e2742b75db5" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899129 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdc64a4-27cd-4937-8d01-9e2742b75db5" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: E1129 07:37:17.899148 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="init" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899156 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="init" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899339 4660 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899356 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="483004e9-a9b0-4ea7-96b4-a7aaed456ac7" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899370 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51cb0ce-5bfb-4755-a879-c83e3f552f55" containerName="dnsmasq-dns" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899391 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="61209158-65d1-44a1-84bb-3b2f98b5566f" containerName="mariadb-account-create-update" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.899411 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdc64a4-27cd-4937-8d01-9e2742b75db5" containerName="mariadb-database-create" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.901341 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:17 crc kubenswrapper[4660]: I1129 07:37:17.906555 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p8r6k"] Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.010116 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b63f-account-create-update-qh5x7"] Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.011104 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.011365 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.011759 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdfr\" (UniqueName: \"kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.013767 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.025369 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b63f-account-create-update-qh5x7"] Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.113115 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdfr\" (UniqueName: \"kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.113178 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " 
pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.113216 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhrw6\" (UniqueName: \"kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.113244 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.114144 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.132121 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdfr\" (UniqueName: \"kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr\") pod \"glance-db-create-p8r6k\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.214673 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.214733 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhrw6\" (UniqueName: \"kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.215701 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.234066 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhrw6\" (UniqueName: \"kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6\") pod \"glance-b63f-account-create-update-qh5x7\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.241975 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.332693 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.461080 4660 generic.go:334] "Generic (PLEG): container finished" podID="0a408d44-6909-4748-9b8e-72da66b0afea" containerID="3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7" exitCode=0 Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.461365 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerDied","Data":"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7"} Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.485524 4660 generic.go:334] "Generic (PLEG): container finished" podID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerID="77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe" exitCode=0 Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.485567 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerDied","Data":"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe"} Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.659985 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p8r6k"] Nov 29 07:37:18 crc kubenswrapper[4660]: W1129 07:37:18.664363 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff6cea5_6bff_44ad_b843_fb7572478416.slice/crio-667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d WatchSource:0}: Error finding container 667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d: Status 404 returned error can't find the container with id 667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d Nov 29 07:37:18 crc kubenswrapper[4660]: I1129 07:37:18.820299 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b63f-account-create-update-qh5x7"] Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.397185 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xdz26" podUID="a75569c9-ce83-4515-894c-b067e01f3d9b" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:37:19 crc kubenswrapper[4660]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:37:19 crc kubenswrapper[4660]: > Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.495186 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p8r6k" event={"ID":"1ff6cea5-6bff-44ad-b843-fb7572478416","Type":"ContainerStarted","Data":"6a0986934544ead13faf262aca052f7033da16729893cce239036ff933eb52d7"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.495235 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p8r6k" event={"ID":"1ff6cea5-6bff-44ad-b843-fb7572478416","Type":"ContainerStarted","Data":"667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.500181 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerStarted","Data":"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.500496 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.502169 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b63f-account-create-update-qh5x7" event={"ID":"a2c1811e-310d-4d94-8a31-14421da00093","Type":"ContainerStarted","Data":"6096b3c9fff58e366d196e93530f4deaa33ffcacdfa56f1bab32fb7dbedc8c72"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.502214 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b63f-account-create-update-qh5x7" event={"ID":"a2c1811e-310d-4d94-8a31-14421da00093","Type":"ContainerStarted","Data":"34cc583b94d6ed38ed6eb8387d6d054a5eec91e2bc3e61535613db7af93a69c7"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.504839 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerStarted","Data":"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b"} Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.505223 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.525916 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-p8r6k" podStartSLOduration=2.525891869 podStartE2EDuration="2.525891869s" podCreationTimestamp="2025-11-29 07:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:19.524985353 +0000 UTC m=+1330.078515252" watchObservedRunningTime="2025-11-29 07:37:19.525891869 +0000 UTC m=+1330.079421788" Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.553268 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.311830076 podStartE2EDuration="1m22.553246264s" podCreationTimestamp="2025-11-29 07:35:57 +0000 UTC" firstStartedPulling="2025-11-29 07:35:59.645337645 +0000 UTC m=+1250.198867554" lastFinishedPulling="2025-11-29 07:36:43.886753843 +0000 UTC m=+1294.440283742" observedRunningTime="2025-11-29 07:37:19.548533383 +0000 UTC m=+1330.102063302" watchObservedRunningTime="2025-11-29 07:37:19.553246264 +0000 UTC m=+1330.106776163" Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.583992 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.822021496 podStartE2EDuration="1m21.583977912s" podCreationTimestamp="2025-11-29 07:35:58 +0000 UTC" firstStartedPulling="2025-11-29 07:36:00.115755578 +0000 UTC m=+1250.669285467" lastFinishedPulling="2025-11-29 07:36:43.877711984 +0000 UTC m=+1294.431241883" observedRunningTime="2025-11-29 07:37:19.582581934 +0000 UTC m=+1330.136111833" watchObservedRunningTime="2025-11-29 07:37:19.583977912 +0000 UTC m=+1330.137507801" Nov 29 07:37:19 crc kubenswrapper[4660]: I1129 07:37:19.611180 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b63f-account-create-update-qh5x7" podStartSLOduration=2.611150572 podStartE2EDuration="2.611150572s" podCreationTimestamp="2025-11-29 07:37:17 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:37:19.604984012 +0000 UTC m=+1330.158513921" watchObservedRunningTime="2025-11-29 07:37:19.611150572 +0000 UTC m=+1330.164680471" Nov 29 07:37:20 crc kubenswrapper[4660]: I1129 07:37:20.513096 4660 generic.go:334] "Generic (PLEG): container finished" podID="1ff6cea5-6bff-44ad-b843-fb7572478416" containerID="6a0986934544ead13faf262aca052f7033da16729893cce239036ff933eb52d7" exitCode=0 Nov 29 07:37:20 crc kubenswrapper[4660]: I1129 07:37:20.513144 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p8r6k" event={"ID":"1ff6cea5-6bff-44ad-b843-fb7572478416","Type":"ContainerDied","Data":"6a0986934544ead13faf262aca052f7033da16729893cce239036ff933eb52d7"} Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.521323 4660 generic.go:334] "Generic (PLEG): container finished" podID="a2c1811e-310d-4d94-8a31-14421da00093" containerID="6096b3c9fff58e366d196e93530f4deaa33ffcacdfa56f1bab32fb7dbedc8c72" exitCode=0 Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.521535 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b63f-account-create-update-qh5x7" event={"ID":"a2c1811e-310d-4d94-8a31-14421da00093","Type":"ContainerDied","Data":"6096b3c9fff58e366d196e93530f4deaa33ffcacdfa56f1bab32fb7dbedc8c72"} Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.892441 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0" Nov 29 07:37:21 crc kubenswrapper[4660]: E1129 07:37:21.892599 4660 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 29 07:37:21 crc kubenswrapper[4660]: E1129 07:37:21.892817 4660 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 29 07:37:21 crc kubenswrapper[4660]: E1129 07:37:21.892873 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift podName:1875d22e-2809-4d96-9cb9-bac77320c5a3 nodeName:}" failed. No retries permitted until 2025-11-29 07:37:37.892854885 +0000 UTC m=+1348.446384784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift") pod "swift-storage-0" (UID: "1875d22e-2809-4d96-9cb9-bac77320c5a3") : configmap "swift-ring-files" not found Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.915452 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.993880 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bdfr\" (UniqueName: \"kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr\") pod \"1ff6cea5-6bff-44ad-b843-fb7572478416\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.993989 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts\") pod \"1ff6cea5-6bff-44ad-b843-fb7572478416\" (UID: \"1ff6cea5-6bff-44ad-b843-fb7572478416\") " Nov 29 07:37:21 crc kubenswrapper[4660]: I1129 07:37:21.994574 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ff6cea5-6bff-44ad-b843-fb7572478416" (UID: "1ff6cea5-6bff-44ad-b843-fb7572478416"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:21.999859 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr" (OuterVolumeSpecName: "kube-api-access-8bdfr") pod "1ff6cea5-6bff-44ad-b843-fb7572478416" (UID: "1ff6cea5-6bff-44ad-b843-fb7572478416"). InnerVolumeSpecName "kube-api-access-8bdfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.096231 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bdfr\" (UniqueName: \"kubernetes.io/projected/1ff6cea5-6bff-44ad-b843-fb7572478416-kube-api-access-8bdfr\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.096264 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff6cea5-6bff-44ad-b843-fb7572478416-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.531260 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p8r6k" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.531257 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p8r6k" event={"ID":"1ff6cea5-6bff-44ad-b843-fb7572478416","Type":"ContainerDied","Data":"667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d"} Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.532674 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667d6707e9b66ed7553c3398b0058d3571bcf192fdc2e3a76b5ae05dd538bf6d" Nov 29 07:37:22 crc kubenswrapper[4660]: I1129 07:37:22.885709 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.010107 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhrw6\" (UniqueName: \"kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6\") pod \"a2c1811e-310d-4d94-8a31-14421da00093\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.010201 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts\") pod \"a2c1811e-310d-4d94-8a31-14421da00093\" (UID: \"a2c1811e-310d-4d94-8a31-14421da00093\") " Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.011025 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2c1811e-310d-4d94-8a31-14421da00093" (UID: "a2c1811e-310d-4d94-8a31-14421da00093"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.017789 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6" (OuterVolumeSpecName: "kube-api-access-jhrw6") pod "a2c1811e-310d-4d94-8a31-14421da00093" (UID: "a2c1811e-310d-4d94-8a31-14421da00093"). InnerVolumeSpecName "kube-api-access-jhrw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.111882 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhrw6\" (UniqueName: \"kubernetes.io/projected/a2c1811e-310d-4d94-8a31-14421da00093-kube-api-access-jhrw6\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.111913 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2c1811e-310d-4d94-8a31-14421da00093-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.540112 4660 generic.go:334] "Generic (PLEG): container finished" podID="d487e762-0eca-4f42-aae2-1b8674868db1" containerID="66d7327e477d4e231d23b447135b3bd0a79dee481215d7dae458f1547ef4159b" exitCode=0 Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.540194 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5xg97" event={"ID":"d487e762-0eca-4f42-aae2-1b8674868db1","Type":"ContainerDied","Data":"66d7327e477d4e231d23b447135b3bd0a79dee481215d7dae458f1547ef4159b"} Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.542725 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b63f-account-create-update-qh5x7" event={"ID":"a2c1811e-310d-4d94-8a31-14421da00093","Type":"ContainerDied","Data":"34cc583b94d6ed38ed6eb8387d6d054a5eec91e2bc3e61535613db7af93a69c7"} Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.542768 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34cc583b94d6ed38ed6eb8387d6d054a5eec91e2bc3e61535613db7af93a69c7" Nov 29 07:37:23 crc kubenswrapper[4660]: I1129 07:37:23.542793 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b63f-account-create-update-qh5x7" Nov 29 07:37:24 crc kubenswrapper[4660]: I1129 07:37:24.387167 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xdz26" podUID="a75569c9-ce83-4515-894c-b067e01f3d9b" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:37:24 crc kubenswrapper[4660]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:37:24 crc kubenswrapper[4660]: > Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.282438 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.294805 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rdslv" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.444033 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.559494 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5xg97" event={"ID":"d487e762-0eca-4f42-aae2-1b8674868db1","Type":"ContainerDied","Data":"c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4"} Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.559561 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99641f6097dbf18b5677df2ec03c239a432a5d723198db8836f03ad601e53e4" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.559700 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5xg97" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.590474 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.590750 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.591554 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.592523 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.592745 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.592869 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.592918 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.592968 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tqb\" (UniqueName: \"kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb\") pod \"d487e762-0eca-4f42-aae2-1b8674868db1\" (UID: \"d487e762-0eca-4f42-aae2-1b8674868db1\") " Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.593522 4660 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d487e762-0eca-4f42-aae2-1b8674868db1-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.593870 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.601041 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb" (OuterVolumeSpecName: "kube-api-access-h5tqb") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "kube-api-access-h5tqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.608812 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.620476 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.633070 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.638156 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts" (OuterVolumeSpecName: "scripts") pod "d487e762-0eca-4f42-aae2-1b8674868db1" (UID: "d487e762-0eca-4f42-aae2-1b8674868db1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695010 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695046 4660 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695060 4660 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d487e762-0eca-4f42-aae2-1b8674868db1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695075 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tqb\" (UniqueName: \"kubernetes.io/projected/d487e762-0eca-4f42-aae2-1b8674868db1-kube-api-access-h5tqb\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695087 4660 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.695098 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d487e762-0eca-4f42-aae2-1b8674868db1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.989819 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xdz26-config-5sfg8"] Nov 29 07:37:25 crc kubenswrapper[4660]: E1129 07:37:25.990240 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d487e762-0eca-4f42-aae2-1b8674868db1" containerName="swift-ring-rebalance" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990259 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d487e762-0eca-4f42-aae2-1b8674868db1" containerName="swift-ring-rebalance" Nov 29 07:37:25 crc kubenswrapper[4660]: E1129 07:37:25.990273 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff6cea5-6bff-44ad-b843-fb7572478416" containerName="mariadb-database-create" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990283 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff6cea5-6bff-44ad-b843-fb7572478416" containerName="mariadb-database-create" Nov 29 07:37:25 crc kubenswrapper[4660]: 
E1129 07:37:25.990306 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c1811e-310d-4d94-8a31-14421da00093" containerName="mariadb-account-create-update" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990314 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c1811e-310d-4d94-8a31-14421da00093" containerName="mariadb-account-create-update" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990503 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d487e762-0eca-4f42-aae2-1b8674868db1" containerName="swift-ring-rebalance" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990530 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff6cea5-6bff-44ad-b843-fb7572478416" containerName="mariadb-database-create" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.990539 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c1811e-310d-4d94-8a31-14421da00093" containerName="mariadb-account-create-update" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.991170 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:25 crc kubenswrapper[4660]: I1129 07:37:25.994446 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.014081 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26-config-5sfg8"] Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100465 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100591 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100659 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100688 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzv8k\" (UniqueName: \"kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100715 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: 
\"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.100823 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202575 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202604 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzv8k\" (UniqueName: \"kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202656 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202725 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.202750 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.203031 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.203033 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: 
\"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.203301 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.203778 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.205263 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.225069 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzv8k\" (UniqueName: \"kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k\") pod \"ovn-controller-xdz26-config-5sfg8\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.308754 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:26 crc kubenswrapper[4660]: I1129 07:37:26.617101 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26-config-5sfg8"] Nov 29 07:37:27 crc kubenswrapper[4660]: I1129 07:37:27.583491 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-5sfg8" event={"ID":"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a","Type":"ContainerStarted","Data":"50cf0c3f3935ca2229667f1f3a3ba38d0fa356e89a6e960f9b7ffd5906d6027f"} Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.795049 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gcb27"] Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.796271 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.800761 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.801126 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h6jp5" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.820263 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gcb27"] Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.960214 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.960264 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.960370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:28 crc kubenswrapper[4660]: I1129 07:37:28.960422 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5xj\" (UniqueName: \"kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.061607 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm5xj\" (UniqueName: \"kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.061738 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.061763 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.061853 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data\") pod 
\"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.067761 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.067929 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.071692 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.086295 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm5xj\" (UniqueName: \"kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj\") pod \"glance-db-sync-gcb27\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.120536 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gcb27" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.150695 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.396473 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xdz26" podUID="a75569c9-ce83-4515-894c-b067e01f3d9b" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:37:29 crc kubenswrapper[4660]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:37:29 crc kubenswrapper[4660]: > Nov 29 07:37:29 crc kubenswrapper[4660]: I1129 07:37:29.808392 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 07:37:30 crc kubenswrapper[4660]: I1129 07:37:30.282667 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gcb27"] Nov 29 07:37:30 crc kubenswrapper[4660]: W1129 07:37:30.295722 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4ba5b4_3360_458f_8de9_6c0630ad7cbf.slice/crio-2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600 WatchSource:0}: Error finding container 2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600: Status 404 returned error can't find the container with id 2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600 Nov 29 07:37:30 crc 
kubenswrapper[4660]: I1129 07:37:30.611591 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gcb27" event={"ID":"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf","Type":"ContainerStarted","Data":"2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600"} Nov 29 07:37:34 crc kubenswrapper[4660]: I1129 07:37:34.403215 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xdz26" podUID="a75569c9-ce83-4515-894c-b067e01f3d9b" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:37:34 crc kubenswrapper[4660]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:37:34 crc kubenswrapper[4660]: > Nov 29 07:37:35 crc kubenswrapper[4660]: I1129 07:37:35.500227 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:37:35 crc kubenswrapper[4660]: I1129 07:37:35.500577 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:37:37 crc kubenswrapper[4660]: I1129 07:37:37.928347 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0" Nov 29 07:37:37 crc kubenswrapper[4660]: I1129 07:37:37.936430 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1875d22e-2809-4d96-9cb9-bac77320c5a3-etc-swift\") pod \"swift-storage-0\" (UID: \"1875d22e-2809-4d96-9cb9-bac77320c5a3\") " pod="openstack/swift-storage-0" Nov 29 07:37:38 crc kubenswrapper[4660]: I1129 07:37:38.085959 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 29 07:37:38 crc kubenswrapper[4660]: I1129 07:37:38.601054 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 29 07:37:38 crc kubenswrapper[4660]: W1129 07:37:38.607180 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1875d22e_2809_4d96_9cb9_bac77320c5a3.slice/crio-3d2588dcde755036af2cad920abc0d6e9f011e48c77c40391d0d9c6f29fb822a WatchSource:0}: Error finding container 3d2588dcde755036af2cad920abc0d6e9f011e48c77c40391d0d9c6f29fb822a: Status 404 returned error can't find the container with id 3d2588dcde755036af2cad920abc0d6e9f011e48c77c40391d0d9c6f29fb822a Nov 29 07:37:38 crc kubenswrapper[4660]: I1129 07:37:38.674933 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"3d2588dcde755036af2cad920abc0d6e9f011e48c77c40391d0d9c6f29fb822a"} Nov 29 07:37:39 crc kubenswrapper[4660]: I1129 07:37:39.146376 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 07:37:39 crc kubenswrapper[4660]: I1129 07:37:39.382728 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xdz26" podUID="a75569c9-ce83-4515-894c-b067e01f3d9b" containerName="ovn-controller" probeResult="failure" output=< Nov 29 07:37:39 crc kubenswrapper[4660]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 29 07:37:39 crc kubenswrapper[4660]: > Nov 29 07:37:39 crc kubenswrapper[4660]: I1129 07:37:39.806661 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 07:37:41 crc kubenswrapper[4660]: E1129 07:37:41.231991 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4222990293/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 29 07:37:41 crc kubenswrapper[4660]: E1129 07:37:41.233115 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gcb27_openstack(0a4ba5b4-3360-458f-8de9-6c0630ad7cbf): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4222990293/1\": happened during read: context canceled" logger="UnhandledError" Nov 29 07:37:41 crc kubenswrapper[4660]: E1129 07:37:41.234772 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage4222990293/1\\\": happened during read: context canceled\"" pod="openstack/glance-db-sync-gcb27" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" Nov 29 07:37:41 crc kubenswrapper[4660]: I1129 07:37:41.697975 4660 generic.go:334] "Generic (PLEG): container finished" podID="2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" containerID="3732f70fa3de06ae3fab7d1e6ecf3188a3303e1e9315d4418e2d0043eeb22b5b" exitCode=0 Nov 29 07:37:41 crc kubenswrapper[4660]: I1129 07:37:41.709386 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-5sfg8" event={"ID":"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a","Type":"ContainerDied","Data":"3732f70fa3de06ae3fab7d1e6ecf3188a3303e1e9315d4418e2d0043eeb22b5b"} Nov 29 07:37:41 crc kubenswrapper[4660]: E1129 07:37:41.716016 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gcb27" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" 
Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.042856 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211601 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211705 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211775 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211803 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run" (OuterVolumeSpecName: "var-run") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211897 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.211847 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.212006 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzv8k\" (UniqueName: \"kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.212039 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn\") pod \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\" (UID: \"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a\") " Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.212362 4660 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.212374 4660 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.212396 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.213379 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.213591 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts" (OuterVolumeSpecName: "scripts") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.224042 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k" (OuterVolumeSpecName: "kube-api-access-fzv8k") pod "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" (UID: "2ea2cee2-2b81-45a0-a56f-0a4099e4d08a"). InnerVolumeSpecName "kube-api-access-fzv8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.314672 4660 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.314916 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzv8k\" (UniqueName: \"kubernetes.io/projected/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-kube-api-access-fzv8k\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.315005 4660 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.315065 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.714822 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-5sfg8" event={"ID":"2ea2cee2-2b81-45a0-a56f-0a4099e4d08a","Type":"ContainerDied","Data":"50cf0c3f3935ca2229667f1f3a3ba38d0fa356e89a6e960f9b7ffd5906d6027f"} Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.715135 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50cf0c3f3935ca2229667f1f3a3ba38d0fa356e89a6e960f9b7ffd5906d6027f" Nov 29 07:37:43 crc kubenswrapper[4660]: I1129 07:37:43.715102 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26-config-5sfg8" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.204105 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xdz26-config-5sfg8"] Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.213825 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xdz26-config-5sfg8"] Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.280393 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xdz26-config-j8hcn"] Nov 29 07:37:44 crc kubenswrapper[4660]: E1129 07:37:44.280830 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" containerName="ovn-config" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.280854 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" containerName="ovn-config" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.281061 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" containerName="ovn-config" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.281774 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.284032 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.297160 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26-config-j8hcn"] Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.392245 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xdz26" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440132 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440171 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440204 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kbj\" (UniqueName: \"kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440243 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440265 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.440307 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541064 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541156 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541215 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kbj\" (UniqueName: \"kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541257 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.541280 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.542001 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.542075 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.542081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.542128 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.543793 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.559067 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kbj\" (UniqueName: \"kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj\") pod \"ovn-controller-xdz26-config-j8hcn\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:44 crc kubenswrapper[4660]: I1129 07:37:44.599160 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:45 crc kubenswrapper[4660]: I1129 07:37:45.721164 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea2cee2-2b81-45a0-a56f-0a4099e4d08a" path="/var/lib/kubelet/pods/2ea2cee2-2b81-45a0-a56f-0a4099e4d08a/volumes" Nov 29 07:37:47 crc kubenswrapper[4660]: I1129 07:37:47.145193 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xdz26-config-j8hcn"] Nov 29 07:37:47 crc kubenswrapper[4660]: I1129 07:37:47.763483 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-j8hcn" event={"ID":"dbf57c5f-4fbd-4b54-a617-74b24987129d","Type":"ContainerStarted","Data":"7a311b1199350fd20f9a4459595aac94e27169d61e06f144eea38f8debe6f6d5"} Nov 29 07:37:49 crc kubenswrapper[4660]: I1129 07:37:49.146018 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 07:37:49 crc kubenswrapper[4660]: I1129 07:37:49.792015 4660 generic.go:334] "Generic (PLEG): container finished" podID="dbf57c5f-4fbd-4b54-a617-74b24987129d" containerID="59ccd31e20601580f9b94f1b845b510b131262eaa48534a26444589237b64395" exitCode=0 Nov 29 07:37:49 crc kubenswrapper[4660]: I1129 07:37:49.792390 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-j8hcn" event={"ID":"dbf57c5f-4fbd-4b54-a617-74b24987129d","Type":"ContainerDied","Data":"59ccd31e20601580f9b94f1b845b510b131262eaa48534a26444589237b64395"} Nov 29 07:37:49 crc kubenswrapper[4660]: I1129 07:37:49.807603 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.404898 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572335 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6kbj\" (UniqueName: \"kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572414 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572441 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572499 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572520 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572485 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run" (OuterVolumeSpecName: "var-run") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572495 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572556 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts\") pod \"dbf57c5f-4fbd-4b54-a617-74b24987129d\" (UID: \"dbf57c5f-4fbd-4b54-a617-74b24987129d\") " Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.572543 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.573181 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.573289 4660 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.573318 4660 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.573330 4660 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/dbf57c5f-4fbd-4b54-a617-74b24987129d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.573354 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts" (OuterVolumeSpecName: "scripts") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.579927 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj" (OuterVolumeSpecName: "kube-api-access-w6kbj") pod "dbf57c5f-4fbd-4b54-a617-74b24987129d" (UID: "dbf57c5f-4fbd-4b54-a617-74b24987129d"). InnerVolumeSpecName "kube-api-access-w6kbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.674856 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.674896 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6kbj\" (UniqueName: \"kubernetes.io/projected/dbf57c5f-4fbd-4b54-a617-74b24987129d-kube-api-access-w6kbj\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.674910 4660 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/dbf57c5f-4fbd-4b54-a617-74b24987129d-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.811601 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xdz26-config-j8hcn" event={"ID":"dbf57c5f-4fbd-4b54-a617-74b24987129d","Type":"ContainerDied","Data":"7a311b1199350fd20f9a4459595aac94e27169d61e06f144eea38f8debe6f6d5"} Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.811659 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xdz26-config-j8hcn" Nov 29 07:37:51 crc kubenswrapper[4660]: I1129 07:37:51.811676 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a311b1199350fd20f9a4459595aac94e27169d61e06f144eea38f8debe6f6d5" Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.506165 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xdz26-config-j8hcn"] Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.513829 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xdz26-config-j8hcn"] Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.819681 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"eb34a94d6374932e1bf19187a2542bd93c081f7265a9db4d34157fae83a40b23"} Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.819728 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"70d9dde8aaf94d3d4169e504e2a6494771923e40a62181623ccd3a27a891435f"} Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.819745 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"9d1d4525687b82c2317e38581d9f61a7e7d1c7124031de290eb249f7defb5f25"} Nov 29 07:37:52 crc kubenswrapper[4660]: I1129 07:37:52.819757 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"dbb699bd28c9042d17dcdbb92b8600292ef53aa07b0266bc9f4492381fba132e"} Nov 29 07:37:53 crc kubenswrapper[4660]: I1129 07:37:53.727000 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf57c5f-4fbd-4b54-a617-74b24987129d" path="/var/lib/kubelet/pods/dbf57c5f-4fbd-4b54-a617-74b24987129d/volumes" Nov 29 07:37:53 crc kubenswrapper[4660]: I1129 07:37:53.830751 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"4a8607dccdfa14d7ce5aaac58e5d74ef80e1b390c5ebcd2a61fe21e9364a7570"} Nov 29 07:37:54 crc kubenswrapper[4660]: I1129 07:37:54.842390 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"fff3f8ee4a8741db7f18fe6c03587acc3c6b58ccc1727968f9bcfdf06bec0566"} Nov 29 07:37:54 crc kubenswrapper[4660]: I1129 07:37:54.842744 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"be94a83991a958b0d347bbe4c4455a0df7e8de304b11dd44fdc802dbd226a2d8"} Nov 29 07:37:54 crc kubenswrapper[4660]: I1129 07:37:54.842759 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"7254a74494b184090a554828a02cf7e7ba58c7d2678df58e4b5141170418fea8"} Nov 29 07:37:56 crc kubenswrapper[4660]: I1129 07:37:56.858038 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"3165570a84402d1acb816551cd1d4594ccb658968c82e1090743ae47a8916096"} Nov 29 07:37:56 crc kubenswrapper[4660]: I1129 07:37:56.858539 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"92e86afa8aae831bf6586e74cac5457c7d86ace16a42e9ecbfc735efd441acab"} Nov 29 07:37:56 crc kubenswrapper[4660]: I1129 07:37:56.858552 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"a1a88fd1f0028400213a22d8230a2d0136e6a9bbfaf45cf21c24f33261cf562a"} Nov 29 07:37:56 crc kubenswrapper[4660]: I1129 07:37:56.858563 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"c94383c4f03118935c889ffd91126cb5c94f9625b79bb193395bf3589598c1a6"} Nov 29 07:37:57 crc kubenswrapper[4660]: I1129 07:37:57.887403 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"96b39db2893a31b1148dd5af8dfce485c89424beef0a0e10acced02afc816c07"} Nov 29 07:37:57 crc kubenswrapper[4660]: I1129 07:37:57.887452 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"bffdca74b818ffcc19db183de790ea7f744106de0347ae37d09707ee9fb223e2"} Nov 29 07:37:57 crc kubenswrapper[4660]: I1129 07:37:57.887467 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1875d22e-2809-4d96-9cb9-bac77320c5a3","Type":"ContainerStarted","Data":"019183cb088c18c5a6069a34c08c7eff7c03d2b5c2e465d94dd7ce6b1a7fc190"} Nov 29 07:37:57 crc kubenswrapper[4660]: I1129 07:37:57.936822 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.912980229 podStartE2EDuration="53.936797437s" podCreationTimestamp="2025-11-29 07:37:04 +0000 UTC" firstStartedPulling="2025-11-29 07:37:38.609107434 +0000 UTC m=+1349.162637333" lastFinishedPulling="2025-11-29 07:37:55.632924642 +0000 UTC m=+1366.186454541" observedRunningTime="2025-11-29 07:37:57.930253706 +0000 UTC m=+1368.483783645" watchObservedRunningTime="2025-11-29 07:37:57.936797437 +0000 UTC m=+1368.490327336" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.198165 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"] Nov 29 07:37:58 crc kubenswrapper[4660]: E1129 07:37:58.198484 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf57c5f-4fbd-4b54-a617-74b24987129d" containerName="ovn-config" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.198501 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf57c5f-4fbd-4b54-a617-74b24987129d" containerName="ovn-config" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.198696 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf57c5f-4fbd-4b54-a617-74b24987129d" containerName="ovn-config" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.199586 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.205340 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.237770 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"] Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.304569 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw2wd\" (UniqueName: \"kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.304831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.304951 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.305016 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.305210 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.305371 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.407221 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.407546 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: 
\"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.407589 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.407666 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.407700 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.408519 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw2wd\" (UniqueName: \"kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.408600 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.408730 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.408749 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.409498 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.409535 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:37:58 
Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.429920 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw2wd\" (UniqueName: \"kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd\") pod \"dnsmasq-dns-5c79d794d7-z265m\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " pod="openstack/dnsmasq-dns-5c79d794d7-z265m"
Nov 29 07:37:58 crc kubenswrapper[4660]: I1129 07:37:58.520290 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z265m"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.015442 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"]
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.154197 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.808902 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.811800 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mpdjp"]
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.812716 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.852012 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mpdjp"]
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.916064 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" event={"ID":"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f","Type":"ContainerStarted","Data":"b2c6dec7fb3e4eaf901045116c5b496d9f3114b526fcf2591748afa0142ba1a9"}
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.944813 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.944873 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2j9m\" (UniqueName: \"kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.983281 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fm77f"]
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.984860 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fm77f"
Nov 29 07:37:59 crc kubenswrapper[4660]: I1129 07:37:59.996194 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fm77f"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.017601 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-99db-account-create-update-6xmmv"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.028505 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.037070 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.048156 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.048210 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2j9m\" (UniqueName: \"kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.056168 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.072344 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-99db-account-create-update-6xmmv"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.105464 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4d27-account-create-update-sbsxh"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.108987 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.120529 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.159253 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2j9m\" (UniqueName: \"kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m\") pod \"cinder-db-create-mpdjp\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.165310 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrfm\" (UniqueName: \"kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.165409 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.165599 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.165809 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkdj\" (UniqueName: \"kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.166083 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4d27-account-create-update-sbsxh"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.166361 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mpdjp"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.265984 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dwvvj"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267006 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dwvvj"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267204 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267253 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267297 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkdj\" (UniqueName: \"kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267319 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267359 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrfm\" (UniqueName: \"kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.267438 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtc5f\" (UniqueName: \"kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.270010 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.281322 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.283885 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dwvvj"]
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.316728 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrfm\" (UniqueName: \"kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm\") pod \"cinder-99db-account-create-update-6xmmv\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " pod="openstack/cinder-99db-account-create-update-6xmmv"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.328470 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkdj\" (UniqueName: \"kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj\") pod \"barbican-db-create-fm77f\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.357344 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fm77f"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.373064 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.373109 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.373169 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtc5f\" (UniqueName: \"kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.373213 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6nc\" (UniqueName: \"kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.374263 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh"
Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.377585 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-99db-account-create-update-6xmmv"
Need to start a new one" pod="openstack/cinder-99db-account-create-update-6xmmv" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.394583 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtc5f\" (UniqueName: \"kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f\") pod \"barbican-4d27-account-create-update-sbsxh\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " pod="openstack/barbican-4d27-account-create-update-sbsxh" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.479930 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.480010 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6nc\" (UniqueName: \"kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.480765 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.497177 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ac6-account-create-update-sqpwx"] Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.498286 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.504413 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.529369 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ac6-account-create-update-sqpwx"] Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.530412 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6nc\" (UniqueName: \"kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc\") pod \"neutron-db-create-dwvvj\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.581571 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4d27-account-create-update-sbsxh" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.581966 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hscn\" (UniqueName: \"kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.582054 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.606442 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mpdjp"] Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.653723 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.687694 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hscn\" (UniqueName: \"kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.687803 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.688495 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.701887 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vnbp6"] Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.706126 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.709964 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.710185 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.710296 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvsdd" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.713552 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.763192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hscn\" (UniqueName: \"kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn\") pod \"neutron-5ac6-account-create-update-sqpwx\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.779121 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vnbp6"] Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.907820 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqf2\" (UniqueName: \"kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.907898 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.907936 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.943986 4660 generic.go:334] "Generic (PLEG): container finished" podID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerID="e478e9c8920e067c72f84f2159aa1ca2bc398fd2dba37679d3e6c48d8ecca44a" exitCode=0 Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.944073 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" event={"ID":"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f","Type":"ContainerDied","Data":"e478e9c8920e067c72f84f2159aa1ca2bc398fd2dba37679d3e6c48d8ecca44a"} Nov 29 07:38:00 crc kubenswrapper[4660]: I1129 07:38:00.948884 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mpdjp" event={"ID":"b1db5943-0f0c-4b0f-827d-297cf4773210","Type":"ContainerStarted","Data":"a84aa7b263818f6b0d8cc02b711da5f061974dfec2e96582b260a43944568d67"} Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.008803 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.009105 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqf2\" (UniqueName: \"kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.009141 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.014723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.015091 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.020530 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.055246 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqf2\" (UniqueName: \"kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2\") pod \"keystone-db-sync-vnbp6\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.241947 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-99db-account-create-update-6xmmv"] Nov 29 07:38:01 crc kubenswrapper[4660]: W1129 07:38:01.283835 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5baff9b_19db_4ae1_b016_dce7f4f84f1a.slice/crio-ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab WatchSource:0}: Error finding container ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab: Status 404 returned error can't find the container with id ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.341003 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.424798 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4d27-account-create-update-sbsxh"] Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.465394 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fm77f"] Nov 29 07:38:01 crc kubenswrapper[4660]: W1129 07:38:01.496309 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7173830_627e_4f39_b843_5ced3d7b5efa.slice/crio-599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c WatchSource:0}: Error finding container 599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c: Status 404 returned error can't find the container with id 599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.619759 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dwvvj"] Nov 29 07:38:01 crc kubenswrapper[4660]: W1129 07:38:01.651582 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fb6ebf7_7e64_4470_b468_2ce0f1a0bd8c.slice/crio-c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618 WatchSource:0}: Error finding container c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618: Status 404 returned error can't find the container with id c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618 Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.666167 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ac6-account-create-update-sqpwx"] Nov 29 07:38:01 crc kubenswrapper[4660]: W1129 07:38:01.668144 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68cb8551_52db_4811_ab40_fa77d412662a.slice/crio-5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65 WatchSource:0}: Error finding container 5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65: Status 404 returned error can't find the container with id 5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65 Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.897600 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vnbp6"] Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.983047 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4d27-account-create-update-sbsxh" event={"ID":"85987bf0-2e03-4db1-a740-0184d42b2540","Type":"ContainerStarted","Data":"368720d6e2abf6352b18960794b6a5977a4a3ab6a05f2a344fbd2da4a79a5582"} Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.992188 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-99db-account-create-update-6xmmv" event={"ID":"c5baff9b-19db-4ae1-b016-dce7f4f84f1a","Type":"ContainerStarted","Data":"ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab"} Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.995869 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" event={"ID":"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f","Type":"ContainerStarted","Data":"7e109ebefa2a2c56a60bc1689cddf515bbb0ab72f61e7d44b92bafd07838e73e"} Nov 29 07:38:01 crc kubenswrapper[4660]: I1129 07:38:01.995923 4660 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.000904 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mpdjp" event={"ID":"b1db5943-0f0c-4b0f-827d-297cf4773210","Type":"ContainerStarted","Data":"fdb87d30d273f9ce0caa15fe2cbeb98854483dfc1c6edafc9c01d44589349a7e"} Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.012860 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fm77f" event={"ID":"e7173830-627e-4f39-b843-5ced3d7b5efa","Type":"ContainerStarted","Data":"599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c"} Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.017474 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vnbp6" event={"ID":"d1e6819d-5ed0-4388-a7da-8d22e20ad10c","Type":"ContainerStarted","Data":"25daafc8db718665e928409c96f808e1e3ebc5aac948b61a01931dbf3a55be48"} Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.019083 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ac6-account-create-update-sqpwx" event={"ID":"68cb8551-52db-4811-ab40-fa77d412662a","Type":"ContainerStarted","Data":"5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65"} Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.025381 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwvvj" event={"ID":"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c","Type":"ContainerStarted","Data":"c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618"} Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.046588 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podStartSLOduration=4.046570108 podStartE2EDuration="4.046570108s" podCreationTimestamp="2025-11-29 07:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:02.025019854 +0000 UTC m=+1372.578549763" watchObservedRunningTime="2025-11-29 07:38:02.046570108 +0000 UTC m=+1372.600100007" Nov 29 07:38:02 crc kubenswrapper[4660]: I1129 07:38:02.049193 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-mpdjp" podStartSLOduration=3.049173259 podStartE2EDuration="3.049173259s" podCreationTimestamp="2025-11-29 07:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:02.046325921 +0000 UTC m=+1372.599855820" watchObservedRunningTime="2025-11-29 07:38:02.049173259 +0000 UTC m=+1372.602703158" Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.037774 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwvvj" event={"ID":"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c","Type":"ContainerStarted","Data":"897fc2d69e05d4e9fad535d2c9567897e246ca10506442de808f635e602e2616"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.039401 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4d27-account-create-update-sbsxh" event={"ID":"85987bf0-2e03-4db1-a740-0184d42b2540","Type":"ContainerStarted","Data":"ea923b87a60c042407e01907eb13ad4896ad2126a7d19a327bb86f8d6f016d1f"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.041926 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-99db-account-create-update-6xmmv" event={"ID":"c5baff9b-19db-4ae1-b016-dce7f4f84f1a","Type":"ContainerStarted","Data":"f2a60c3558d4aa3df89f3c7ac83324c6e746107500b954b29e6b569d1ea5aa5f"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.043789 4660 generic.go:334] "Generic (PLEG): container finished" podID="b1db5943-0f0c-4b0f-827d-297cf4773210" containerID="fdb87d30d273f9ce0caa15fe2cbeb98854483dfc1c6edafc9c01d44589349a7e" exitCode=0 Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.043854 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mpdjp" event={"ID":"b1db5943-0f0c-4b0f-827d-297cf4773210","Type":"ContainerDied","Data":"fdb87d30d273f9ce0caa15fe2cbeb98854483dfc1c6edafc9c01d44589349a7e"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.045794 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fm77f" event={"ID":"e7173830-627e-4f39-b843-5ced3d7b5efa","Type":"ContainerStarted","Data":"873d51bc11ae1e02f259575ba5348581b1ed4c53753ab5429caaa068bb45c059"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.047319 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ac6-account-create-update-sqpwx" event={"ID":"68cb8551-52db-4811-ab40-fa77d412662a","Type":"ContainerStarted","Data":"e947b4446c70333c504ff836efad20d9ae39a6d5a04e55f81a97c909e5897f20"} Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.059830 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-dwvvj" podStartSLOduration=3.059808187 podStartE2EDuration="3.059808187s" podCreationTimestamp="2025-11-29 07:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:03.051696344 +0000 UTC m=+1373.605226243" watchObservedRunningTime="2025-11-29 07:38:03.059808187 +0000 UTC m=+1373.613338086" Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.072047 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-99db-account-create-update-6xmmv" podStartSLOduration=4.072030833 podStartE2EDuration="4.072030833s" podCreationTimestamp="2025-11-29 07:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:03.068427554 +0000 UTC m=+1373.621957453" watchObservedRunningTime="2025-11-29 07:38:03.072030833 +0000 UTC m=+1373.625560732" Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.094472 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ac6-account-create-update-sqpwx" podStartSLOduration=3.09445625 podStartE2EDuration="3.09445625s" podCreationTimestamp="2025-11-29 07:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:03.088865616 +0000 UTC m=+1373.642395505" watchObservedRunningTime="2025-11-29 07:38:03.09445625 +0000 UTC m=+1373.647986149" Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.136892 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-fm77f" podStartSLOduration=4.136875489 podStartE2EDuration="4.136875489s" podCreationTimestamp="2025-11-29 07:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-29 07:38:03.136060056 +0000 UTC m=+1373.689589955" watchObservedRunningTime="2025-11-29 07:38:03.136875489 +0000 UTC m=+1373.690405388" Nov 29 07:38:03 crc kubenswrapper[4660]: I1129 07:38:03.141310 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4d27-account-create-update-sbsxh" podStartSLOduration=3.14130288 podStartE2EDuration="3.14130288s" podCreationTimestamp="2025-11-29 07:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:03.12641505 +0000 UTC m=+1373.679944949" watchObservedRunningTime="2025-11-29 07:38:03.14130288 +0000 UTC m=+1373.694832779" Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.070117 4660 generic.go:334] "Generic (PLEG): container finished" podID="85987bf0-2e03-4db1-a740-0184d42b2540" containerID="ea923b87a60c042407e01907eb13ad4896ad2126a7d19a327bb86f8d6f016d1f" exitCode=0 Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.070195 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4d27-account-create-update-sbsxh" event={"ID":"85987bf0-2e03-4db1-a740-0184d42b2540","Type":"ContainerDied","Data":"ea923b87a60c042407e01907eb13ad4896ad2126a7d19a327bb86f8d6f016d1f"} Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.074145 4660 generic.go:334] "Generic (PLEG): container finished" podID="c5baff9b-19db-4ae1-b016-dce7f4f84f1a" containerID="f2a60c3558d4aa3df89f3c7ac83324c6e746107500b954b29e6b569d1ea5aa5f" exitCode=0 Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.074197 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-99db-account-create-update-6xmmv" event={"ID":"c5baff9b-19db-4ae1-b016-dce7f4f84f1a","Type":"ContainerDied","Data":"f2a60c3558d4aa3df89f3c7ac83324c6e746107500b954b29e6b569d1ea5aa5f"} Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.075567 4660 generic.go:334] "Generic (PLEG): container finished" podID="e7173830-627e-4f39-b843-5ced3d7b5efa" containerID="873d51bc11ae1e02f259575ba5348581b1ed4c53753ab5429caaa068bb45c059" exitCode=0 Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.075626 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fm77f" event={"ID":"e7173830-627e-4f39-b843-5ced3d7b5efa","Type":"ContainerDied","Data":"873d51bc11ae1e02f259575ba5348581b1ed4c53753ab5429caaa068bb45c059"} Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.077201 4660 generic.go:334] "Generic (PLEG): container finished" podID="68cb8551-52db-4811-ab40-fa77d412662a" containerID="e947b4446c70333c504ff836efad20d9ae39a6d5a04e55f81a97c909e5897f20" exitCode=0 Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.077252 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ac6-account-create-update-sqpwx" event={"ID":"68cb8551-52db-4811-ab40-fa77d412662a","Type":"ContainerDied","Data":"e947b4446c70333c504ff836efad20d9ae39a6d5a04e55f81a97c909e5897f20"} Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.078425 4660 generic.go:334] "Generic (PLEG): container finished" podID="1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" containerID="897fc2d69e05d4e9fad535d2c9567897e246ca10506442de808f635e602e2616" exitCode=0 Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.078468 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwvvj" 
event={"ID":"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c","Type":"ContainerDied","Data":"897fc2d69e05d4e9fad535d2c9567897e246ca10506442de808f635e602e2616"} Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.500564 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:05 crc kubenswrapper[4660]: I1129 07:38:05.500654 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:08 crc kubenswrapper[4660]: I1129 07:38:08.521855 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:38:08 crc kubenswrapper[4660]: I1129 07:38:08.589153 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"] Nov 29 07:38:08 crc kubenswrapper[4660]: I1129 07:38:08.592129 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="dnsmasq-dns" containerID="cri-o://12e4b3443a159c7a4d826e7b277a52f684976e6d52cf0aad9be435eebb152806" gracePeriod=10 Nov 29 07:38:10 crc kubenswrapper[4660]: I1129 07:38:10.184685 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 29 07:38:11 crc kubenswrapper[4660]: I1129 07:38:11.134050 4660 generic.go:334] "Generic (PLEG): container finished" podID="e1d28983-0802-4233-b388-506681c95edd" containerID="12e4b3443a159c7a4d826e7b277a52f684976e6d52cf0aad9be435eebb152806" exitCode=0 Nov 29 07:38:11 crc kubenswrapper[4660]: I1129 07:38:11.134135 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" event={"ID":"e1d28983-0802-4233-b388-506681c95edd","Type":"ContainerDied","Data":"12e4b3443a159c7a4d826e7b277a52f684976e6d52cf0aad9be435eebb152806"} Nov 29 07:38:14 crc kubenswrapper[4660]: E1129 07:38:14.582976 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 29 07:38:14 crc kubenswrapper[4660]: E1129 07:38:14.583513 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gcb27_openstack(0a4ba5b4-3360-458f-8de9-6c0630ad7cbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:38:14 crc kubenswrapper[4660]: E1129 07:38:14.585826 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gcb27" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" Nov 29 07:38:15 crc kubenswrapper[4660]: I1129 07:38:15.185287 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.530477 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.541679 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-99db-account-create-update-6xmmv" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.547580 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts\") pod \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.548073 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5baff9b-19db-4ae1-b016-dce7f4f84f1a" (UID: "c5baff9b-19db-4ae1-b016-dce7f4f84f1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.548688 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" (UID: "1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.548334 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts\") pod \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.549745 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrrfm\" (UniqueName: \"kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm\") pod \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\" (UID: \"c5baff9b-19db-4ae1-b016-dce7f4f84f1a\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.550858 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6nc\" (UniqueName: \"kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc\") pod \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\" (UID: \"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.553635 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.553969 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.562709 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc" (OuterVolumeSpecName: "kube-api-access-kj6nc") pod "1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" (UID: "1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c"). InnerVolumeSpecName "kube-api-access-kj6nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.570194 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm" (OuterVolumeSpecName: "kube-api-access-xrrfm") pod "c5baff9b-19db-4ae1-b016-dce7f4f84f1a" (UID: "c5baff9b-19db-4ae1-b016-dce7f4f84f1a"). InnerVolumeSpecName "kube-api-access-xrrfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: E1129 07:38:18.601698 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 29 07:38:18 crc kubenswrapper[4660]: E1129 07:38:18.601853 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-vnbp6_openstack(d1e6819d-5ed0-4388-a7da-8d22e20ad10c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:38:18 crc kubenswrapper[4660]: E1129 07:38:18.603180 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-vnbp6" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.610958 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4d27-account-create-update-sbsxh" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.618691 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.639587 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mpdjp" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655136 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts\") pod \"68cb8551-52db-4811-ab40-fa77d412662a\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655284 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtc5f\" (UniqueName: \"kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f\") pod \"85987bf0-2e03-4db1-a740-0184d42b2540\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655425 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts\") pod \"85987bf0-2e03-4db1-a740-0184d42b2540\" (UID: \"85987bf0-2e03-4db1-a740-0184d42b2540\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655502 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2j9m\" (UniqueName: \"kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m\") pod \"b1db5943-0f0c-4b0f-827d-297cf4773210\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655588 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hscn\" (UniqueName: \"kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn\") pod \"68cb8551-52db-4811-ab40-fa77d412662a\" (UID: \"68cb8551-52db-4811-ab40-fa77d412662a\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.655641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts\") pod \"b1db5943-0f0c-4b0f-827d-297cf4773210\" (UID: \"b1db5943-0f0c-4b0f-827d-297cf4773210\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.656245 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrrfm\" (UniqueName: \"kubernetes.io/projected/c5baff9b-19db-4ae1-b016-dce7f4f84f1a-kube-api-access-xrrfm\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.656302 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6nc\" (UniqueName: \"kubernetes.io/projected/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c-kube-api-access-kj6nc\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.656645 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68cb8551-52db-4811-ab40-fa77d412662a" (UID: 
"68cb8551-52db-4811-ab40-fa77d412662a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.657198 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1db5943-0f0c-4b0f-827d-297cf4773210" (UID: "b1db5943-0f0c-4b0f-827d-297cf4773210"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.657231 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85987bf0-2e03-4db1-a740-0184d42b2540" (UID: "85987bf0-2e03-4db1-a740-0184d42b2540"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.657680 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fm77f" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.666639 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn" (OuterVolumeSpecName: "kube-api-access-8hscn") pod "68cb8551-52db-4811-ab40-fa77d412662a" (UID: "68cb8551-52db-4811-ab40-fa77d412662a"). InnerVolumeSpecName "kube-api-access-8hscn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.670982 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f" (OuterVolumeSpecName: "kube-api-access-xtc5f") pod "85987bf0-2e03-4db1-a740-0184d42b2540" (UID: "85987bf0-2e03-4db1-a740-0184d42b2540"). InnerVolumeSpecName "kube-api-access-xtc5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.680465 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m" (OuterVolumeSpecName: "kube-api-access-g2j9m") pod "b1db5943-0f0c-4b0f-827d-297cf4773210" (UID: "b1db5943-0f0c-4b0f-827d-297cf4773210"). InnerVolumeSpecName "kube-api-access-g2j9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757194 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts\") pod \"e7173830-627e-4f39-b843-5ced3d7b5efa\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757299 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftkdj\" (UniqueName: \"kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj\") pod \"e7173830-627e-4f39-b843-5ced3d7b5efa\" (UID: \"e7173830-627e-4f39-b843-5ced3d7b5efa\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757843 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hscn\" (UniqueName: \"kubernetes.io/projected/68cb8551-52db-4811-ab40-fa77d412662a-kube-api-access-8hscn\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757865 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1db5943-0f0c-4b0f-827d-297cf4773210-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757878 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68cb8551-52db-4811-ab40-fa77d412662a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757892 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtc5f\" (UniqueName: \"kubernetes.io/projected/85987bf0-2e03-4db1-a740-0184d42b2540-kube-api-access-xtc5f\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757902 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85987bf0-2e03-4db1-a740-0184d42b2540-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.757912 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2j9m\" (UniqueName: \"kubernetes.io/projected/b1db5943-0f0c-4b0f-827d-297cf4773210-kube-api-access-g2j9m\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.761476 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7173830-627e-4f39-b843-5ced3d7b5efa" (UID: "e7173830-627e-4f39-b843-5ced3d7b5efa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.763280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj" (OuterVolumeSpecName: "kube-api-access-ftkdj") pod "e7173830-627e-4f39-b843-5ced3d7b5efa" (UID: "e7173830-627e-4f39-b843-5ced3d7b5efa"). InnerVolumeSpecName "kube-api-access-ftkdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.854015 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.858555 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7173830-627e-4f39-b843-5ced3d7b5efa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.858580 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftkdj\" (UniqueName: \"kubernetes.io/projected/e7173830-627e-4f39-b843-5ced3d7b5efa-kube-api-access-ftkdj\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.959490 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc\") pod \"e1d28983-0802-4233-b388-506681c95edd\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.959880 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config\") pod \"e1d28983-0802-4233-b388-506681c95edd\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.959936 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb\") pod \"e1d28983-0802-4233-b388-506681c95edd\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.959961 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plr9s\" (UniqueName: \"kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s\") pod \"e1d28983-0802-4233-b388-506681c95edd\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.960031 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb\") pod \"e1d28983-0802-4233-b388-506681c95edd\" (UID: \"e1d28983-0802-4233-b388-506681c95edd\") " Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.963229 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s" (OuterVolumeSpecName: "kube-api-access-plr9s") pod "e1d28983-0802-4233-b388-506681c95edd" (UID: "e1d28983-0802-4233-b388-506681c95edd"). InnerVolumeSpecName "kube-api-access-plr9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.995391 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1d28983-0802-4233-b388-506681c95edd" (UID: "e1d28983-0802-4233-b388-506681c95edd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.996925 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1d28983-0802-4233-b388-506681c95edd" (UID: "e1d28983-0802-4233-b388-506681c95edd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:18 crc kubenswrapper[4660]: I1129 07:38:18.997826 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config" (OuterVolumeSpecName: "config") pod "e1d28983-0802-4233-b388-506681c95edd" (UID: "e1d28983-0802-4233-b388-506681c95edd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.000436 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1d28983-0802-4233-b388-506681c95edd" (UID: "e1d28983-0802-4233-b388-506681c95edd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.062387 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.062425 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.062436 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.062449 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plr9s\" (UniqueName: \"kubernetes.io/projected/e1d28983-0802-4233-b388-506681c95edd-kube-api-access-plr9s\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.062460 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d28983-0802-4233-b388-506681c95edd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.222000 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-99db-account-create-update-6xmmv" event={"ID":"c5baff9b-19db-4ae1-b016-dce7f4f84f1a","Type":"ContainerDied","Data":"ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.222053 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb2e800dd5a2257baebbdedc25e1327bd3f214b5788532b2573f91ac625baab" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.222015 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-99db-account-create-update-6xmmv" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.223749 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mpdjp" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.223929 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mpdjp" event={"ID":"b1db5943-0f0c-4b0f-827d-297cf4773210","Type":"ContainerDied","Data":"a84aa7b263818f6b0d8cc02b711da5f061974dfec2e96582b260a43944568d67"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.223968 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84aa7b263818f6b0d8cc02b711da5f061974dfec2e96582b260a43944568d67" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.225951 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fm77f" event={"ID":"e7173830-627e-4f39-b843-5ced3d7b5efa","Type":"ContainerDied","Data":"599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.225987 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599630bcaaad415d7bd246ab7dd670e9778e85fee2d960c4f47615ccbbe2839c" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.226007 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fm77f" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.227830 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ac6-account-create-update-sqpwx" event={"ID":"68cb8551-52db-4811-ab40-fa77d412662a","Type":"ContainerDied","Data":"5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.227856 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5119c62860de0298ed04d1794ce1e2edda2f250f644452a5c8f8f9d4ef09da65" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.227891 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ac6-account-create-update-sqpwx" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.231354 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" event={"ID":"e1d28983-0802-4233-b388-506681c95edd","Type":"ContainerDied","Data":"741e1242a2d1ad13dc5b7e5b20e26c8bb211b36e140675a52a54eea1b9613372"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.231416 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-k95b5" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.231446 4660 scope.go:117] "RemoveContainer" containerID="12e4b3443a159c7a4d826e7b277a52f684976e6d52cf0aad9be435eebb152806" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.234810 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dwvvj" event={"ID":"1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c","Type":"ContainerDied","Data":"c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.234842 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f70b2c74db4b005acbfbec7e258f98f798f2d32565fa6aa52dfaa29ae2f618" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.234803 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dwvvj" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.237106 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4d27-account-create-update-sbsxh" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.237786 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4d27-account-create-update-sbsxh" event={"ID":"85987bf0-2e03-4db1-a740-0184d42b2540","Type":"ContainerDied","Data":"368720d6e2abf6352b18960794b6a5977a4a3ab6a05f2a344fbd2da4a79a5582"} Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.237981 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="368720d6e2abf6352b18960794b6a5977a4a3ab6a05f2a344fbd2da4a79a5582" Nov 29 07:38:19 crc kubenswrapper[4660]: E1129 07:38:19.242591 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-vnbp6" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.278988 4660 scope.go:117] "RemoveContainer" containerID="2b9ba5d3215ead505dbacac3ad7d78532c14eaf355d36879c47cc0059ca0d587" Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.324865 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"] Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.336242 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-k95b5"] Nov 29 07:38:19 crc kubenswrapper[4660]: I1129 07:38:19.709661 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d28983-0802-4233-b388-506681c95edd" path="/var/lib/kubelet/pods/e1d28983-0802-4233-b388-506681c95edd/volumes" Nov 29 07:38:26 crc kubenswrapper[4660]: E1129 07:38:26.696402 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gcb27" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" Nov 29 07:38:34 crc kubenswrapper[4660]: I1129 07:38:34.369081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vnbp6" event={"ID":"d1e6819d-5ed0-4388-a7da-8d22e20ad10c","Type":"ContainerStarted","Data":"52bfb9ff5e3725a9d18e8ab5cbda7ef807136ab8ed06a0b1c4ce534ff4101d56"} Nov 29 07:38:34 crc kubenswrapper[4660]: I1129 07:38:34.399785 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vnbp6" podStartSLOduration=3.424141695 podStartE2EDuration="34.399763006s" podCreationTimestamp="2025-11-29 07:38:00 +0000 UTC" firstStartedPulling="2025-11-29 07:38:01.912852257 +0000 UTC m=+1372.466382156" lastFinishedPulling="2025-11-29 07:38:32.888473528 +0000 UTC m=+1403.442003467" observedRunningTime="2025-11-29 07:38:34.399083267 +0000 UTC m=+1404.952613166" watchObservedRunningTime="2025-11-29 07:38:34.399763006 +0000 UTC m=+1404.953292905" Nov 29 07:38:35 crc kubenswrapper[4660]: I1129 07:38:35.500690 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:38:35 crc kubenswrapper[4660]: I1129 07:38:35.500769 4660 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:38:35 crc kubenswrapper[4660]: I1129 07:38:35.500828 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:38:35 crc kubenswrapper[4660]: I1129 07:38:35.501880 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:38:35 crc kubenswrapper[4660]: I1129 07:38:35.501942 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47" gracePeriod=600 Nov 29 07:38:36 crc kubenswrapper[4660]: I1129 07:38:36.384666 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47" exitCode=0 Nov 29 07:38:36 crc kubenswrapper[4660]: I1129 07:38:36.384767 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47"} Nov 29 07:38:36 crc kubenswrapper[4660]: I1129 07:38:36.384985 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c"} Nov 29 07:38:36 crc kubenswrapper[4660]: I1129 07:38:36.385009 4660 scope.go:117] "RemoveContainer" containerID="bd511a85552f8f6a0486302ddd3dd88b243fb575cbf96f9f78b0be146b756d4a" Nov 29 07:38:43 crc kubenswrapper[4660]: I1129 07:38:43.467958 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gcb27" event={"ID":"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf","Type":"ContainerStarted","Data":"ec17e27d5188783f27817fc57273a36da1772c39c9c40da96513f83e73893bb2"} Nov 29 07:38:43 crc kubenswrapper[4660]: I1129 07:38:43.487283 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gcb27" podStartSLOduration=3.46904927 podStartE2EDuration="1m15.487263067s" podCreationTimestamp="2025-11-29 07:37:28 +0000 UTC" firstStartedPulling="2025-11-29 07:37:30.30245328 +0000 UTC m=+1340.855983179" lastFinishedPulling="2025-11-29 07:38:42.320667077 +0000 UTC m=+1412.874196976" observedRunningTime="2025-11-29 07:38:43.485280582 +0000 UTC m=+1414.038810471" watchObservedRunningTime="2025-11-29 07:38:43.487263067 +0000 UTC m=+1414.040792966" Nov 29 07:38:44 crc kubenswrapper[4660]: I1129 07:38:44.483092 4660 generic.go:334] "Generic (PLEG): container finished" podID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" 
containerID="52bfb9ff5e3725a9d18e8ab5cbda7ef807136ab8ed06a0b1c4ce534ff4101d56" exitCode=0 Nov 29 07:38:44 crc kubenswrapper[4660]: I1129 07:38:44.483138 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vnbp6" event={"ID":"d1e6819d-5ed0-4388-a7da-8d22e20ad10c","Type":"ContainerDied","Data":"52bfb9ff5e3725a9d18e8ab5cbda7ef807136ab8ed06a0b1c4ce534ff4101d56"} Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.805184 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.940098 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle\") pod \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.940198 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knqf2\" (UniqueName: \"kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2\") pod \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.940278 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data\") pod \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\" (UID: \"d1e6819d-5ed0-4388-a7da-8d22e20ad10c\") " Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.950834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2" (OuterVolumeSpecName: "kube-api-access-knqf2") pod "d1e6819d-5ed0-4388-a7da-8d22e20ad10c" (UID: "d1e6819d-5ed0-4388-a7da-8d22e20ad10c"). InnerVolumeSpecName "kube-api-access-knqf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:38:45 crc kubenswrapper[4660]: I1129 07:38:45.964066 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1e6819d-5ed0-4388-a7da-8d22e20ad10c" (UID: "d1e6819d-5ed0-4388-a7da-8d22e20ad10c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.002988 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data" (OuterVolumeSpecName: "config-data") pod "d1e6819d-5ed0-4388-a7da-8d22e20ad10c" (UID: "d1e6819d-5ed0-4388-a7da-8d22e20ad10c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.042173 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knqf2\" (UniqueName: \"kubernetes.io/projected/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-kube-api-access-knqf2\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.042216 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.042230 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e6819d-5ed0-4388-a7da-8d22e20ad10c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.498966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vnbp6" event={"ID":"d1e6819d-5ed0-4388-a7da-8d22e20ad10c","Type":"ContainerDied","Data":"25daafc8db718665e928409c96f808e1e3ebc5aac948b61a01931dbf3a55be48"} Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.499190 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25daafc8db718665e928409c96f808e1e3ebc5aac948b61a01931dbf3a55be48" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.499017 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vnbp6" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.869931 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fgbj9"] Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870768 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" containerName="keystone-db-sync" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870789 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" containerName="keystone-db-sync" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870806 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85987bf0-2e03-4db1-a740-0184d42b2540" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870815 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="85987bf0-2e03-4db1-a740-0184d42b2540" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870828 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb8551-52db-4811-ab40-fa77d412662a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870837 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb8551-52db-4811-ab40-fa77d412662a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870857 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="init" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870864 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="init" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870891 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d28983-0802-4233-b388-506681c95edd" 
containerName="dnsmasq-dns" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870899 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d28983-0802-4233-b388-506681c95edd" containerName="dnsmasq-dns" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870910 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1db5943-0f0c-4b0f-827d-297cf4773210" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870918 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1db5943-0f0c-4b0f-827d-297cf4773210" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870930 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5baff9b-19db-4ae1-b016-dce7f4f84f1a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870937 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5baff9b-19db-4ae1-b016-dce7f4f84f1a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870950 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870958 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: E1129 07:38:46.870973 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7173830-627e-4f39-b843-5ced3d7b5efa" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.870980 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7173830-627e-4f39-b843-5ced3d7b5efa" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871158 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" containerName="keystone-db-sync" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871190 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871201 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1db5943-0f0c-4b0f-827d-297cf4773210" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871216 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="85987bf0-2e03-4db1-a740-0184d42b2540" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871225 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cb8551-52db-4811-ab40-fa77d412662a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871241 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7173830-627e-4f39-b843-5ced3d7b5efa" containerName="mariadb-database-create" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871254 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5baff9b-19db-4ae1-b016-dce7f4f84f1a" containerName="mariadb-account-create-update" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871265 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d28983-0802-4233-b388-506681c95edd" 
containerName="dnsmasq-dns" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.871967 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.885655 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.885780 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.885825 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.885841 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.886084 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.886106 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvsdd" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.886957 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.917993 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fgbj9"] Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.930695 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957189 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957254 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957294 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957343 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957367 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrdk\" (UniqueName: \"kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk\") pod 
\"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957405 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957437 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqc5p\" (UniqueName: \"kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957466 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957499 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957548 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957572 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:46 crc kubenswrapper[4660]: I1129 07:38:46.957733 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068221 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068291 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068352 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrdk\" (UniqueName: \"kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068443 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068499 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqc5p\" (UniqueName: \"kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068539 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068585 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068680 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068715 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config\") pod 
\"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.068839 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.084170 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.085277 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.086317 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.086562 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.095081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.097691 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.100136 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.138997 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.140144 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.163319 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7j498"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.163777 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.163787 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrdk\" (UniqueName: \"kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk\") pod \"keystone-bootstrap-fgbj9\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.164995 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqc5p\" (UniqueName: \"kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p\") pod \"dnsmasq-dns-5b868669f-m4n62\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.168691 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.180228 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2nmzh" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.180254 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.180434 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.190106 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7j498"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.208004 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.217027 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.224273 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.226339 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.230123 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.230370 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.230551 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.230693 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2n9xq" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274084 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274148 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdn9z\" (UniqueName: \"kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274228 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274291 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274350 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274387 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274430 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bsx\" (UniqueName: \"kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274458 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.274895 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.282316 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.282545 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.282645 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.375693 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-g6zvg"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.377277 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385062 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385241 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385350 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fwv82" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385378 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385430 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385462 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385485 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5bsx\" (UniqueName: \"kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385510 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385541 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385566 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385590 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 
crc kubenswrapper[4660]: I1129 07:38:47.385636 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385653 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdn9z\" (UniqueName: \"kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.385690 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.386696 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.388212 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.388746 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.406260 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.419557 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.426714 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.427215 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle\") pod \"cinder-db-sync-7j498\" 
(UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.430214 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.436512 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.452066 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5bsx\" (UniqueName: \"kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx\") pod \"cinder-db-sync-7j498\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") " pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.468459 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdn9z\" (UniqueName: \"kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z\") pod \"horizon-75b667c9-52pp8\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.488216 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798sh\" (UniqueName: \"kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.488314 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.488362 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.549901 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g6zvg"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.550032 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7j498" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.578111 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.593528 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.593593 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.593697 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798sh\" (UniqueName: \"kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.621148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.621257 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.625095 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798sh\" (UniqueName: \"kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh\") pod \"neutron-db-sync-g6zvg\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.637279 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wcwr9"] Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.638497 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.651419 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.654136 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-57df7" Nov 29 07:38:47 crc kubenswrapper[4660]: I1129 07:38:47.663752 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wcwr9"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.703928 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflgj\" (UniqueName: \"kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.704011 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.704067 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.744465 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.745740 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.768698 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.806426 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.807212 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvf4\" (UniqueName: \"kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.810764 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.810786 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.810910 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.810944 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dflgj\" (UniqueName: \"kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.810982 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.811012 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.811050 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc 
kubenswrapper[4660]: I1129 07:38:47.820639 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.832752 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.862177 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.882944 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vxrqr"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.886185 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.917579 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.917650 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvf4\" (UniqueName: \"kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.917698 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.917722 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.917791 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.918717 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.919268 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.920687 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.928043 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.939961 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bb4k8" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.942813 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.957386 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.960526 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dflgj\" (UniqueName: \"kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj\") pod \"barbican-db-sync-wcwr9\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") " pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.962667 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vxrqr"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:47.997494 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvf4\" (UniqueName: \"kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4\") pod \"horizon-6dc476b8c7-6svhd\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.015025 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.025119 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.025182 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8rs\" (UniqueName: \"kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.025237 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.025273 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.025313 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.033026 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.037242 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.056947 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.057555 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.095884 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.106665 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.108089 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.125018 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126558 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126628 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126664 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126687 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126706 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126726 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126796 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgl55\" (UniqueName: \"kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 
07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126815 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8rs\" (UniqueName: \"kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126845 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.126868 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.127867 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.142802 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.155354 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.155875 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.173188 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fgbj9"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.192489 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.200708 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8rs\" (UniqueName: \"kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs\") pod \"placement-db-sync-vxrqr\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229149 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229216 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp6qc\" (UniqueName: \"kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229260 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229289 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229313 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229393 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229418 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229445 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgl55\" (UniqueName: \"kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229477 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc 
kubenswrapper[4660]: I1129 07:38:48.229506 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229530 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.229641 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.231443 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.232097 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.236917 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.240463 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vxrqr" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.242763 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.252578 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.254195 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.275352 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgl55\" (UniqueName: \"kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55\") pod \"ceilometer-0\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.330956 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.330993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.331045 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.331121 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.331142 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.331159 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp6qc\" (UniqueName: 
\"kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.332449 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.332677 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.333098 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.333457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.333675 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.370397 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp6qc\" (UniqueName: \"kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc\") pod \"dnsmasq-dns-cf78879c9-s49ch\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.407205 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:48.655536 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:49.600123 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fgbj9" event={"ID":"edfa6c4b-ad09-4e8d-9c6b-8cb236398660","Type":"ContainerStarted","Data":"f5a2a42ea025b92c234184c531f58ab50bf05373d00bd59d888a6d25997d922f"} Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.010215 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.044805 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.046500 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.065714 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.167702 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.167762 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.167803 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nbpr\" (UniqueName: \"kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.167874 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.167899 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.269305 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.269365 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbpr\" 
(UniqueName: \"kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.269443 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.269471 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:50.269505 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.431430 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.431993 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.432307 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.440346 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbpr\" (UniqueName: \"kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.440828 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key\") pod \"horizon-848469c5f5-nsstg\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:52.732640 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:55.643565 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fgbj9" event={"ID":"edfa6c4b-ad09-4e8d-9c6b-8cb236398660","Type":"ContainerStarted","Data":"29c6517a1bbddc90fad8042120d5400394a40f86f0d3783d60b89e95f470b4ff"} Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:55.668694 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fgbj9" podStartSLOduration=9.668668559 podStartE2EDuration="9.668668559s" podCreationTimestamp="2025-11-29 07:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:38:55.661884273 +0000 UTC m=+1426.215414172" watchObservedRunningTime="2025-11-29 07:38:55.668668559 +0000 UTC m=+1426.222198468" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.745168 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.803102 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.804466 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.809789 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.817875 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923511 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923729 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923760 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923786 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923806 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923896 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8dr6\" (UniqueName: \"kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.923916 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:56 crc kubenswrapper[4660]: I1129 07:38:56.953082 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.002069 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d8477fd94-v56g5"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.003383 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024801 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8dr6\" (UniqueName: \"kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024849 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024906 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024934 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024955 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.024977 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.025002 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.025422 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.026811 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.027567 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.036660 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.038199 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.041127 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d8477fd94-v56g5"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.050545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.075077 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8dr6\" (UniqueName: \"kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6\") pod \"horizon-76565fb74d-wgqb4\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.126893 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxpt\" 
(UniqueName: \"kubernetes.io/projected/953f9580-5907-45bf-ae44-e48149acc44c-kube-api-access-5fxpt\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.126934 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-tls-certs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.126977 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-scripts\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.126997 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953f9580-5907-45bf-ae44-e48149acc44c-logs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.127049 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-combined-ca-bundle\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.127073 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-config-data\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.127107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-secret-key\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.130748 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.166242 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229336 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxpt\" (UniqueName: \"kubernetes.io/projected/953f9580-5907-45bf-ae44-e48149acc44c-kube-api-access-5fxpt\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229393 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-tls-certs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229423 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-scripts\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229446 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953f9580-5907-45bf-ae44-e48149acc44c-logs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229504 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-combined-ca-bundle\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229529 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-config-data\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.229569 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-secret-key\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.233083 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-secret-key\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.236740 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953f9580-5907-45bf-ae44-e48149acc44c-logs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.236978 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-horizon-tls-certs\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.237351 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-scripts\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.238210 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/953f9580-5907-45bf-ae44-e48149acc44c-config-data\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.243836 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953f9580-5907-45bf-ae44-e48149acc44c-combined-ca-bundle\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.257458 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxpt\" (UniqueName: \"kubernetes.io/projected/953f9580-5907-45bf-ae44-e48149acc44c-kube-api-access-5fxpt\") pod \"horizon-5d8477fd94-v56g5\" (UID: \"953f9580-5907-45bf-ae44-e48149acc44c\") " pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.319742 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.676860 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g6zvg"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.682765 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vxrqr"] Nov 29 07:38:57 crc kubenswrapper[4660]: W1129 07:38:57.704731 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e0c494_1877_49d7_8877_308fb75d13b1.slice/crio-51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24 WatchSource:0}: Error finding container 51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24: Status 404 returned error can't find the container with id 51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24 Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.729252 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.733016 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.756396 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.795751 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.806676 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.814583 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:38:57 crc kubenswrapper[4660]: W1129 07:38:57.823778 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab21846a_6632_4068_b7d7_bd8ec0750a64.slice/crio-24dccd745dc813cb8df811d6777cb4106b01a08d9846e4a7467fd6afe990f68a WatchSource:0}: Error finding container 24dccd745dc813cb8df811d6777cb4106b01a08d9846e4a7467fd6afe990f68a: Status 404 returned error can't find the container with id 24dccd745dc813cb8df811d6777cb4106b01a08d9846e4a7467fd6afe990f68a Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.830103 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.840914 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7j498"] Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.849661 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wcwr9"] Nov 29 07:38:57 crc kubenswrapper[4660]: W1129 07:38:57.876823 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f9cecd_58f3_4e48_acfc_6de8cce380df.slice/crio-39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61 WatchSource:0}: Error finding container 39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61: Status 404 returned error can't find the container with id 39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61 Nov 29 07:38:57 crc kubenswrapper[4660]: I1129 07:38:57.906704 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-5d8477fd94-v56g5"] Nov 29 07:38:57 crc kubenswrapper[4660]: W1129 07:38:57.924911 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953f9580_5907_45bf_ae44_e48149acc44c.slice/crio-50ad688a5b11d440370673f942101162e71543e886ce3efe65444889bb4616bf WatchSource:0}: Error finding container 50ad688a5b11d440370673f942101162e71543e886ce3efe65444889bb4616bf: Status 404 returned error can't find the container with id 50ad688a5b11d440370673f942101162e71543e886ce3efe65444889bb4616bf Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.062914 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:38:58 crc kubenswrapper[4660]: W1129 07:38:58.086471 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1c3a22_b3b7_4403_b4d5_263d822b3fab.slice/crio-95b8917e45e9412fe92688284361f7cad245c6b37f35eb1b2cd71e7d2843fa4d WatchSource:0}: Error finding container 95b8917e45e9412fe92688284361f7cad245c6b37f35eb1b2cd71e7d2843fa4d: Status 404 returned error can't find the container with id 95b8917e45e9412fe92688284361f7cad245c6b37f35eb1b2cd71e7d2843fa4d Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.695797 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerStarted","Data":"aa274ed67d854999818914e50d939357464925d8ff5ea5fd6eb4cde1581e78e6"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.697293 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d8477fd94-v56g5" event={"ID":"953f9580-5907-45bf-ae44-e48149acc44c","Type":"ContainerStarted","Data":"50ad688a5b11d440370673f942101162e71543e886ce3efe65444889bb4616bf"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.698777 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-848469c5f5-nsstg" event={"ID":"ab21846a-6632-4068-b7d7-bd8ec0750a64","Type":"ContainerStarted","Data":"24dccd745dc813cb8df811d6777cb4106b01a08d9846e4a7467fd6afe990f68a"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.700050 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wcwr9" event={"ID":"07f9cecd-58f3-4e48-acfc-6de8cce380df","Type":"ContainerStarted","Data":"39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.701103 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-m4n62" event={"ID":"6cc3b2aa-48e3-4989-8b86-a99155f3ee15","Type":"ContainerStarted","Data":"2aaabd4d4b8ae846a2308cdf05162c8e4a863de2f1a73241b42c6059a787b127"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.702153 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7j498" event={"ID":"e34f6bca-d788-40bf-9065-f7f331a8f8d9","Type":"ContainerStarted","Data":"d8c9c9b8bbe607b4ef0f9d3d50556852d1c45293cb50b71cc994e753c6243e17"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.703540 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b667c9-52pp8" event={"ID":"3163df6e-6d10-47d4-b5b0-bd4cb4073e33","Type":"ContainerStarted","Data":"7f9a19329989d93b5614b518ef74d174e2f66be5672ad354e1ffa5fcc5954d30"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.704697 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-sync-g6zvg" event={"ID":"50a095fb-8968-4986-b063-8652e7e2cd0b","Type":"ContainerStarted","Data":"dc63e6a5dbf9db7cde6a6c10feb6b0f7e7fdf53e0a0b047e6972f778a44392b2"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.706127 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerStarted","Data":"0b779c5bcfec541d06e4ed32cd95698f3724b818598348f9dbded4a17d0708b6"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.706217 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerStarted","Data":"a6a123d3e2530c95c1fe4a77cb77254ac74165039f9010d05e6a02f5c4de5235"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.707388 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vxrqr" event={"ID":"a8e0c494-1877-49d7-8877-308fb75d13b1","Type":"ContainerStarted","Data":"51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.708437 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc476b8c7-6svhd" event={"ID":"aeb8043c-7084-41e3-95b0-03e6d70f02f3","Type":"ContainerStarted","Data":"85264f5865adfb58078ca8b5dbaa59a28dda77a9b8320890edf911956715a5cd"} Nov 29 07:38:58 crc kubenswrapper[4660]: I1129 07:38:58.709642 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerStarted","Data":"95b8917e45e9412fe92688284361f7cad245c6b37f35eb1b2cd71e7d2843fa4d"} Nov 29 07:38:59 crc kubenswrapper[4660]: I1129 07:38:59.719740 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g6zvg" event={"ID":"50a095fb-8968-4986-b063-8652e7e2cd0b","Type":"ContainerStarted","Data":"799ebf3bd6cc4f594dd6ed33f3794df92352c8f1e93d05ef46ab674c6481c3b6"} Nov 29 07:39:00 crc kubenswrapper[4660]: I1129 07:39:00.755236 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-m4n62" event={"ID":"6cc3b2aa-48e3-4989-8b86-a99155f3ee15","Type":"ContainerStarted","Data":"2b9ecca7c07af6dee877caa709afa02dd08fb82001ad5d015207d683b98027aa"} Nov 29 07:39:01 crc kubenswrapper[4660]: I1129 07:39:01.768039 4660 generic.go:334] "Generic (PLEG): container finished" podID="05daf086-18b7-460e-8a12-519d25e17862" containerID="0b779c5bcfec541d06e4ed32cd95698f3724b818598348f9dbded4a17d0708b6" exitCode=0 Nov 29 07:39:01 crc kubenswrapper[4660]: I1129 07:39:01.768216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerDied","Data":"0b779c5bcfec541d06e4ed32cd95698f3724b818598348f9dbded4a17d0708b6"} Nov 29 07:39:01 crc kubenswrapper[4660]: I1129 07:39:01.806750 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-g6zvg" podStartSLOduration=14.806735118 podStartE2EDuration="14.806735118s" podCreationTimestamp="2025-11-29 07:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:01.805080243 +0000 UTC m=+1432.358610162" watchObservedRunningTime="2025-11-29 07:39:01.806735118 +0000 UTC m=+1432.360265017" Nov 29 07:39:06 crc kubenswrapper[4660]: 
I1129 07:39:06.819726 4660 generic.go:334] "Generic (PLEG): container finished" podID="6cc3b2aa-48e3-4989-8b86-a99155f3ee15" containerID="2b9ecca7c07af6dee877caa709afa02dd08fb82001ad5d015207d683b98027aa" exitCode=0 Nov 29 07:39:06 crc kubenswrapper[4660]: I1129 07:39:06.819820 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-m4n62" event={"ID":"6cc3b2aa-48e3-4989-8b86-a99155f3ee15","Type":"ContainerDied","Data":"2b9ecca7c07af6dee877caa709afa02dd08fb82001ad5d015207d683b98027aa"} Nov 29 07:39:12 crc kubenswrapper[4660]: E1129 07:39:12.029384 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1243033476/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 07:39:12 crc kubenswrapper[4660]: E1129 07:39:12.030009 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt8rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-vxrqr_openstack(a8e0c494-1877-49d7-8877-308fb75d13b1): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1243033476/1\": happened during read: context canceled" logger="UnhandledError" Nov 29 07:39:12 crc kubenswrapper[4660]: E1129 07:39:12.032084 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1243033476/1\\\": happened during read: context canceled\"" pod="openstack/placement-db-sync-vxrqr" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" Nov 29 07:39:12 crc kubenswrapper[4660]: I1129 07:39:12.904652 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerStarted","Data":"bf4c43ba54149078240a3d3cdea24c70da0636d9a258862b30e383d9ad0aaca2"} Nov 29 07:39:12 crc kubenswrapper[4660]: I1129 07:39:12.904741 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:39:12 crc kubenswrapper[4660]: E1129 07:39:12.905797 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-vxrqr" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" Nov 29 07:39:12 crc kubenswrapper[4660]: I1129 07:39:12.927377 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" podStartSLOduration=25.927352663 podStartE2EDuration="25.927352663s" podCreationTimestamp="2025-11-29 07:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:12.920113194 +0000 UTC m=+1443.473643103" watchObservedRunningTime="2025-11-29 07:39:12.927352663 +0000 UTC m=+1443.480882562" Nov 29 07:39:18 crc kubenswrapper[4660]: I1129 07:39:18.657870 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:39:18 crc kubenswrapper[4660]: I1129 07:39:18.757389 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"] Nov 29 07:39:18 crc kubenswrapper[4660]: I1129 07:39:18.758135 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" containerID="cri-o://7e109ebefa2a2c56a60bc1689cddf515bbb0ab72f61e7d44b92bafd07838e73e" gracePeriod=10 Nov 29 07:39:19 crc kubenswrapper[4660]: I1129 07:39:19.962006 4660 generic.go:334] "Generic (PLEG): container finished" podID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerID="7e109ebefa2a2c56a60bc1689cddf515bbb0ab72f61e7d44b92bafd07838e73e" exitCode=0 Nov 29 07:39:19 crc kubenswrapper[4660]: I1129 07:39:19.962060 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" event={"ID":"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f","Type":"ContainerDied","Data":"7e109ebefa2a2c56a60bc1689cddf515bbb0ab72f61e7d44b92bafd07838e73e"} Nov 29 07:39:23 crc kubenswrapper[4660]: I1129 07:39:23.522056 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 29 07:39:24 crc kubenswrapper[4660]: E1129 07:39:24.069935 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 29 07:39:24 crc kubenswrapper[4660]: E1129 07:39:24.070331 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dflgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wcwr9_openstack(07f9cecd-58f3-4e48-acfc-6de8cce380df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:24 crc kubenswrapper[4660]: E1129 07:39:24.071523 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wcwr9" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" Nov 29 07:39:25 crc kubenswrapper[4660]: E1129 07:39:25.005752 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wcwr9" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" Nov 29 07:39:27 crc kubenswrapper[4660]: I1129 07:39:27.021561 4660 generic.go:334] "Generic (PLEG): container finished" podID="edfa6c4b-ad09-4e8d-9c6b-8cb236398660" containerID="29c6517a1bbddc90fad8042120d5400394a40f86f0d3783d60b89e95f470b4ff" exitCode=0 Nov 29 07:39:27 crc kubenswrapper[4660]: I1129 07:39:27.021638 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fgbj9" event={"ID":"edfa6c4b-ad09-4e8d-9c6b-8cb236398660","Type":"ContainerDied","Data":"29c6517a1bbddc90fad8042120d5400394a40f86f0d3783d60b89e95f470b4ff"} Nov 29 07:39:28 crc kubenswrapper[4660]: I1129 07:39:28.521772 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" 
podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.838326 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:39:29 crc kubenswrapper[4660]: E1129 07:39:29.843938 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:39:29 crc kubenswrapper[4660]: E1129 07:39:29.844212 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57bh588h74h7dhbh5dch86h5f4h56ch646h64fh57fh7fh77h558h564hf8hf6h5bfh58hd4h5fbh5dchcfhc6h656h655h58h8bh664h5dfh648q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdn9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-75b667c9-52pp8_openstack(3163df6e-6d10-47d4-b5b0-bd4cb4073e33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:29 crc kubenswrapper[4660]: E1129 07:39:29.858171 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-75b667c9-52pp8" podUID="3163df6e-6d10-47d4-b5b0-bd4cb4073e33" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925536 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925682 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925731 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925799 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgrdk\" (UniqueName: \"kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925897 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.925929 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts\") pod \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\" (UID: \"edfa6c4b-ad09-4e8d-9c6b-8cb236398660\") " Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.931877 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts" (OuterVolumeSpecName: "scripts") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.933834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk" (OuterVolumeSpecName: "kube-api-access-wgrdk") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "kube-api-access-wgrdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.933947 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.947982 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.954947 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:29 crc kubenswrapper[4660]: I1129 07:39:29.968254 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data" (OuterVolumeSpecName: "config-data") pod "edfa6c4b-ad09-4e8d-9c6b-8cb236398660" (UID: "edfa6c4b-ad09-4e8d-9c6b-8cb236398660"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028401 4660 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028444 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028525 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgrdk\" (UniqueName: \"kubernetes.io/projected/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-kube-api-access-wgrdk\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028542 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028554 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.028565 4660 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edfa6c4b-ad09-4e8d-9c6b-8cb236398660-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.047128 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fgbj9" event={"ID":"edfa6c4b-ad09-4e8d-9c6b-8cb236398660","Type":"ContainerDied","Data":"f5a2a42ea025b92c234184c531f58ab50bf05373d00bd59d888a6d25997d922f"} Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.047164 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a2a42ea025b92c234184c531f58ab50bf05373d00bd59d888a6d25997d922f" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.047221 4660 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fgbj9" Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.918857 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fgbj9"] Nov 29 07:39:30 crc kubenswrapper[4660]: I1129 07:39:30.933056 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fgbj9"] Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.066745 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wgw94"] Nov 29 07:39:31 crc kubenswrapper[4660]: E1129 07:39:31.067247 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edfa6c4b-ad09-4e8d-9c6b-8cb236398660" containerName="keystone-bootstrap" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.067285 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="edfa6c4b-ad09-4e8d-9c6b-8cb236398660" containerName="keystone-bootstrap" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.067687 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="edfa6c4b-ad09-4e8d-9c6b-8cb236398660" containerName="keystone-bootstrap" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.070285 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.076418 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.076567 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.076706 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.076777 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvsdd" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.076909 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.083975 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wgw94"] Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.145789 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.145833 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42sh\" (UniqueName: \"kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.145876 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 
07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.146000 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.146079 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.146234 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.247997 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.248049 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b42sh\" (UniqueName: \"kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.248091 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.248194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.248237 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.248324 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.260532 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.262301 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.264003 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.264729 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.267150 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42sh\" (UniqueName: \"kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.268694 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys\") pod \"keystone-bootstrap-wgw94\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.397899 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:39:31 crc kubenswrapper[4660]: I1129 07:39:31.705976 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edfa6c4b-ad09-4e8d-9c6b-8cb236398660" path="/var/lib/kubelet/pods/edfa6c4b-ad09-4e8d-9c6b-8cb236398660/volumes" Nov 29 07:39:33 crc kubenswrapper[4660]: I1129 07:39:33.521108 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 29 07:39:33 crc kubenswrapper[4660]: I1129 07:39:33.521220 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:39:38 crc kubenswrapper[4660]: I1129 07:39:38.521940 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 29 07:39:43 crc kubenswrapper[4660]: I1129 07:39:43.521841 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.062678 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.063112 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66hd6hbdh646h5f4h697h658h64fh66bh69hc9hfh5dchbbh5c6hc5h64dh646h5f8h64fh695hfbh5h584h64dh687h56ch5f8h598h686h555hf7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8dr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76565fb74d-wgqb4_openstack(3b1c3a22-b3b7-4403-b4d5-263d822b3fab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.066351 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.068254 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.068425 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6ch544h668h657hffhffh68dh99h678h67h65bhc8h66dhf7h59dh65fh599h74h649h5bhf6h66ch59chb9h5bh649h649h68h679hb4h54bhc5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d8477fd94-v56g5_openstack(953f9580-5907-45bf-ae44-e48149acc44c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.070651 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.142708 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.186520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-m4n62" event={"ID":"6cc3b2aa-48e3-4989-8b86-a99155f3ee15","Type":"ContainerDied","Data":"2aaabd4d4b8ae846a2308cdf05162c8e4a863de2f1a73241b42c6059a787b127"} Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.186593 4660 scope.go:117] "RemoveContainer" containerID="2b9ecca7c07af6dee877caa709afa02dd08fb82001ad5d015207d683b98027aa" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.186593 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-m4n62" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.190900 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.195667 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.316533 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.316746 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.317007 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.317069 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.317090 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.317125 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqc5p\" (UniqueName: \"kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p\") pod \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\" (UID: \"6cc3b2aa-48e3-4989-8b86-a99155f3ee15\") " Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.325229 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p" 
(OuterVolumeSpecName: "kube-api-access-gqc5p") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "kube-api-access-gqc5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.367648 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.378156 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.403909 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.408476 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config" (OuterVolumeSpecName: "config") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.409572 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cc3b2aa-48e3-4989-8b86-a99155f3ee15" (UID: "6cc3b2aa-48e3-4989-8b86-a99155f3ee15"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420495 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420535 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420548 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420596 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqc5p\" (UniqueName: \"kubernetes.io/projected/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-kube-api-access-gqc5p\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420692 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.420706 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc3b2aa-48e3-4989-8b86-a99155f3ee15-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.526484 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.526771 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh6ch548h76h677h56bh666h5cbhbbhch689h57fh5bdh67fh666h669h65chc5h5bdh665h657h8dh68ch655h5cbhfh75hcchd9h545hbfhf8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwvf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6dc476b8c7-6svhd_openstack(aeb8043c-7084-41e3-95b0-03e6d70f02f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:45 crc kubenswrapper[4660]: E1129 07:39:45.529662 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6dc476b8c7-6svhd" podUID="aeb8043c-7084-41e3-95b0-03e6d70f02f3" Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.571191 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.579376 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-m4n62"] Nov 29 07:39:45 crc kubenswrapper[4660]: I1129 07:39:45.708662 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc3b2aa-48e3-4989-8b86-a99155f3ee15" path="/var/lib/kubelet/pods/6cc3b2aa-48e3-4989-8b86-a99155f3ee15/volumes" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.388842 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:39:47 crc kubenswrapper[4660]: E1129 07:39:47.445120 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 29 07:39:47 crc kubenswrapper[4660]: E1129 07:39:47.445308 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h4hc9h79h568h576h57bh589h658h97h699h674hbdh56dh56dh66bh5d8h589h64dh697h596hb7h67ch88hd6h67ch66dh664h5cfh66dh5fbh657q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nbpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-848469c5f5-nsstg_openstack(ab21846a-6632-4068-b7d7-bd8ec0750a64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:47 crc kubenswrapper[4660]: E1129 07:39:47.448150 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-848469c5f5-nsstg" podUID="ab21846a-6632-4068-b7d7-bd8ec0750a64" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.463830 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key\") pod \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.465437 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs\") pod \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.465557 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdn9z\" (UniqueName: \"kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z\") pod \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.465732 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data\") pod \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.465716 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs" (OuterVolumeSpecName: "logs") pod "3163df6e-6d10-47d4-b5b0-bd4cb4073e33" (UID: "3163df6e-6d10-47d4-b5b0-bd4cb4073e33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.465880 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts\") pod \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\" (UID: \"3163df6e-6d10-47d4-b5b0-bd4cb4073e33\") " Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.466293 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts" (OuterVolumeSpecName: "scripts") pod "3163df6e-6d10-47d4-b5b0-bd4cb4073e33" (UID: "3163df6e-6d10-47d4-b5b0-bd4cb4073e33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.466414 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data" (OuterVolumeSpecName: "config-data") pod "3163df6e-6d10-47d4-b5b0-bd4cb4073e33" (UID: "3163df6e-6d10-47d4-b5b0-bd4cb4073e33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.466514 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.466530 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.466539 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.470973 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z" (OuterVolumeSpecName: "kube-api-access-tdn9z") pod "3163df6e-6d10-47d4-b5b0-bd4cb4073e33" (UID: "3163df6e-6d10-47d4-b5b0-bd4cb4073e33"). InnerVolumeSpecName "kube-api-access-tdn9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.472882 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3163df6e-6d10-47d4-b5b0-bd4cb4073e33" (UID: "3163df6e-6d10-47d4-b5b0-bd4cb4073e33"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.568692 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdn9z\" (UniqueName: \"kubernetes.io/projected/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-kube-api-access-tdn9z\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:47 crc kubenswrapper[4660]: I1129 07:39:47.568737 4660 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3163df6e-6d10-47d4-b5b0-bd4cb4073e33-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.210053 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75b667c9-52pp8" event={"ID":"3163df6e-6d10-47d4-b5b0-bd4cb4073e33","Type":"ContainerDied","Data":"7f9a19329989d93b5614b518ef74d174e2f66be5672ad354e1ffa5fcc5954d30"} Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.210192 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75b667c9-52pp8" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.305024 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.311147 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75b667c9-52pp8"] Nov 29 07:39:48 crc kubenswrapper[4660]: E1129 07:39:48.693920 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 29 07:39:48 crc kubenswrapper[4660]: E1129 07:39:48.694641 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5bsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7j498_openstack(e34f6bca-d788-40bf-9065-f7f331a8f8d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:48 crc kubenswrapper[4660]: E1129 07:39:48.695938 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7j498" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.787259 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.793271 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912565 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912646 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912670 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvf4\" (UniqueName: \"kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4\") pod \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912726 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data\") pod \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912764 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs\") pod \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912857 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw2wd\" (UniqueName: \"kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912879 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts\") pod \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912920 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912951 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.912973 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0\") pod \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\" (UID: \"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.913013 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key\") pod \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\" (UID: \"aeb8043c-7084-41e3-95b0-03e6d70f02f3\") " Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.915603 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts" (OuterVolumeSpecName: "scripts") pod "aeb8043c-7084-41e3-95b0-03e6d70f02f3" (UID: "aeb8043c-7084-41e3-95b0-03e6d70f02f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.916486 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data" (OuterVolumeSpecName: "config-data") pod "aeb8043c-7084-41e3-95b0-03e6d70f02f3" (UID: "aeb8043c-7084-41e3-95b0-03e6d70f02f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.917229 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs" (OuterVolumeSpecName: "logs") pod "aeb8043c-7084-41e3-95b0-03e6d70f02f3" (UID: "aeb8043c-7084-41e3-95b0-03e6d70f02f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.919968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aeb8043c-7084-41e3-95b0-03e6d70f02f3" (UID: "aeb8043c-7084-41e3-95b0-03e6d70f02f3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.921950 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd" (OuterVolumeSpecName: "kube-api-access-pw2wd") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "kube-api-access-pw2wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.948926 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4" (OuterVolumeSpecName: "kube-api-access-dwvf4") pod "aeb8043c-7084-41e3-95b0-03e6d70f02f3" (UID: "aeb8043c-7084-41e3-95b0-03e6d70f02f3"). InnerVolumeSpecName "kube-api-access-dwvf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.979074 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config" (OuterVolumeSpecName: "config") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.988773 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:48 crc kubenswrapper[4660]: I1129 07:39:48.997864 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.003850 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015417 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015442 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015452 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvf4\" (UniqueName: \"kubernetes.io/projected/aeb8043c-7084-41e3-95b0-03e6d70f02f3-kube-api-access-dwvf4\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015461 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015469 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeb8043c-7084-41e3-95b0-03e6d70f02f3-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015477 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw2wd\" (UniqueName: \"kubernetes.io/projected/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-kube-api-access-pw2wd\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015487 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/aeb8043c-7084-41e3-95b0-03e6d70f02f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015496 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015503 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.015511 4660 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeb8043c-7084-41e3-95b0-03e6d70f02f3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.017005 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" (UID: "5ebdfbf0-79e1-4d9f-868d-cf129b0d139f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.116548 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.227139 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" event={"ID":"5ebdfbf0-79e1-4d9f-868d-cf129b0d139f","Type":"ContainerDied","Data":"b2c6dec7fb3e4eaf901045116c5b496d9f3114b526fcf2591748afa0142ba1a9"} Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.227376 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.233304 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dc476b8c7-6svhd" event={"ID":"aeb8043c-7084-41e3-95b0-03e6d70f02f3","Type":"ContainerDied","Data":"85264f5865adfb58078ca8b5dbaa59a28dda77a9b8320890edf911956715a5cd"} Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.233757 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dc476b8c7-6svhd" Nov 29 07:39:49 crc kubenswrapper[4660]: E1129 07:39:49.236199 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7j498" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.327373 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.337039 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6dc476b8c7-6svhd"] Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.343685 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"] Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.350280 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z265m"] Nov 29 07:39:49 crc kubenswrapper[4660]: E1129 07:39:49.574422 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 29 07:39:49 crc kubenswrapper[4660]: E1129 07:39:49.574898 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h565h5f9hd5h65h654h655h545h5cfh5fdh56dh5bhb7hf7h5b9h5dbh7bh679h5c6h9bh598h5cch68ch5dfh5f9h5bbhf7h67bh595h5bdh564h64dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgl55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(994934b0-1ed3-4a63-b231-34e923c9a2ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.707111 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3163df6e-6d10-47d4-b5b0-bd4cb4073e33" path="/var/lib/kubelet/pods/3163df6e-6d10-47d4-b5b0-bd4cb4073e33/volumes" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.707553 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" path="/var/lib/kubelet/pods/5ebdfbf0-79e1-4d9f-868d-cf129b0d139f/volumes" Nov 29 07:39:49 crc kubenswrapper[4660]: I1129 07:39:49.708385 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb8043c-7084-41e3-95b0-03e6d70f02f3" path="/var/lib/kubelet/pods/aeb8043c-7084-41e3-95b0-03e6d70f02f3/volumes" Nov 29 07:39:51 crc kubenswrapper[4660]: E1129 07:39:51.078101 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 29 07:39:51 crc kubenswrapper[4660]: E1129 07:39:51.078409 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt8rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-vxrqr_openstack(a8e0c494-1877-49d7-8877-308fb75d13b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:39:51 crc kubenswrapper[4660]: E1129 07:39:51.079641 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-vxrqr" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.352602 4660 scope.go:117] "RemoveContainer" containerID="7e109ebefa2a2c56a60bc1689cddf515bbb0ab72f61e7d44b92bafd07838e73e" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.484694 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.524222 4660 scope.go:117] "RemoveContainer" containerID="e478e9c8920e067c72f84f2159aa1ca2bc398fd2dba37679d3e6c48d8ecca44a" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.557229 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nbpr\" (UniqueName: \"kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr\") pod \"ab21846a-6632-4068-b7d7-bd8ec0750a64\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.557404 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts\") pod \"ab21846a-6632-4068-b7d7-bd8ec0750a64\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.557484 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key\") pod \"ab21846a-6632-4068-b7d7-bd8ec0750a64\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.557557 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs\") pod \"ab21846a-6632-4068-b7d7-bd8ec0750a64\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.557698 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data\") pod \"ab21846a-6632-4068-b7d7-bd8ec0750a64\" (UID: \"ab21846a-6632-4068-b7d7-bd8ec0750a64\") " Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.558029 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts" (OuterVolumeSpecName: "scripts") pod "ab21846a-6632-4068-b7d7-bd8ec0750a64" (UID: "ab21846a-6632-4068-b7d7-bd8ec0750a64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.558302 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs" (OuterVolumeSpecName: "logs") pod "ab21846a-6632-4068-b7d7-bd8ec0750a64" (UID: "ab21846a-6632-4068-b7d7-bd8ec0750a64"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.560000 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.560023 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab21846a-6632-4068-b7d7-bd8ec0750a64-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.562002 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data" (OuterVolumeSpecName: "config-data") pod "ab21846a-6632-4068-b7d7-bd8ec0750a64" (UID: "ab21846a-6632-4068-b7d7-bd8ec0750a64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.604498 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ab21846a-6632-4068-b7d7-bd8ec0750a64" (UID: "ab21846a-6632-4068-b7d7-bd8ec0750a64"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.604686 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr" (OuterVolumeSpecName: "kube-api-access-2nbpr") pod "ab21846a-6632-4068-b7d7-bd8ec0750a64" (UID: "ab21846a-6632-4068-b7d7-bd8ec0750a64"). InnerVolumeSpecName "kube-api-access-2nbpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.661887 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nbpr\" (UniqueName: \"kubernetes.io/projected/ab21846a-6632-4068-b7d7-bd8ec0750a64-kube-api-access-2nbpr\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.662152 4660 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab21846a-6632-4068-b7d7-bd8ec0750a64-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.662165 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab21846a-6632-4068-b7d7-bd8ec0750a64-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:39:51 crc kubenswrapper[4660]: I1129 07:39:51.686891 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wgw94"] Nov 29 07:39:51 crc kubenswrapper[4660]: W1129 07:39:51.695560 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2de8145_fcb6_4202_8aa4_85b696c9e69c.slice/crio-ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f WatchSource:0}: Error finding container ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f: Status 404 returned error can't find the container with id ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.255934 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgw94" event={"ID":"b2de8145-fcb6-4202-8aa4-85b696c9e69c","Type":"ContainerStarted","Data":"3123a66dbb851c07c414d2440ddfabf49c61379267a64fb3c92118d02a764047"} Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.256686 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgw94" event={"ID":"b2de8145-fcb6-4202-8aa4-85b696c9e69c","Type":"ContainerStarted","Data":"ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f"} Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.257626 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-848469c5f5-nsstg" event={"ID":"ab21846a-6632-4068-b7d7-bd8ec0750a64","Type":"ContainerDied","Data":"24dccd745dc813cb8df811d6777cb4106b01a08d9846e4a7467fd6afe990f68a"} Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.257648 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-848469c5f5-nsstg" Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.258816 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wcwr9" event={"ID":"07f9cecd-58f3-4e48-acfc-6de8cce380df","Type":"ContainerStarted","Data":"b0ace9d78f96995af6c400c92ef2053d41212469653a67b0f15650f0c0070cf1"} Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.280100 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wgw94" podStartSLOduration=21.280077341 podStartE2EDuration="21.280077341s" podCreationTimestamp="2025-11-29 07:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:39:52.273899491 +0000 UTC m=+1482.827429400" watchObservedRunningTime="2025-11-29 07:39:52.280077341 +0000 UTC m=+1482.833607240" Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.319250 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.335623 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-848469c5f5-nsstg"] Nov 29 07:39:52 crc kubenswrapper[4660]: I1129 07:39:52.345649 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wcwr9" podStartSLOduration=11.828938565 podStartE2EDuration="1m5.345627225s" podCreationTimestamp="2025-11-29 07:38:47 +0000 UTC" firstStartedPulling="2025-11-29 07:38:57.878407482 +0000 UTC m=+1428.431937381" lastFinishedPulling="2025-11-29 07:39:51.395096122 +0000 UTC m=+1481.948626041" observedRunningTime="2025-11-29 07:39:52.324937386 +0000 UTC m=+1482.878467285" watchObservedRunningTime="2025-11-29 07:39:52.345627225 +0000 UTC m=+1482.899157124" Nov 29 07:39:53 crc kubenswrapper[4660]: I1129 07:39:53.521710 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-z265m" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Nov 29 07:39:53 crc kubenswrapper[4660]: I1129 07:39:53.705002 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab21846a-6632-4068-b7d7-bd8ec0750a64" path="/var/lib/kubelet/pods/ab21846a-6632-4068-b7d7-bd8ec0750a64/volumes" Nov 29 07:39:57 crc kubenswrapper[4660]: I1129 07:39:57.299360 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerStarted","Data":"4e1b4c70933dd9a88006812ea0df83f5430cdc4a139b486ce5fc77b6b709e8b3"} Nov 29 07:40:01 crc kubenswrapper[4660]: I1129 07:40:01.337596 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d8477fd94-v56g5" event={"ID":"953f9580-5907-45bf-ae44-e48149acc44c","Type":"ContainerStarted","Data":"8ffd382cb0afc4a1a155c3f9cc51547cff02ed39a9f4fbfd2f839b988bf5723a"} Nov 29 07:40:02 crc kubenswrapper[4660]: I1129 07:40:02.350674 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d8477fd94-v56g5" event={"ID":"953f9580-5907-45bf-ae44-e48149acc44c","Type":"ContainerStarted","Data":"a9f347d2fb13f9e2515878bb6c83a1d090a5ecf14db01c3002b6fb457c342a5c"} Nov 29 07:40:02 crc kubenswrapper[4660]: I1129 07:40:02.378471 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5d8477fd94-v56g5" 
podStartSLOduration=3.468738145 podStartE2EDuration="1m6.378449517s" podCreationTimestamp="2025-11-29 07:38:56 +0000 UTC" firstStartedPulling="2025-11-29 07:38:57.927113082 +0000 UTC m=+1428.480642981" lastFinishedPulling="2025-11-29 07:40:00.836824444 +0000 UTC m=+1491.390354353" observedRunningTime="2025-11-29 07:40:02.37200172 +0000 UTC m=+1492.925531619" watchObservedRunningTime="2025-11-29 07:40:02.378449517 +0000 UTC m=+1492.931979416" Nov 29 07:40:03 crc kubenswrapper[4660]: I1129 07:40:03.364581 4660 generic.go:334] "Generic (PLEG): container finished" podID="b2de8145-fcb6-4202-8aa4-85b696c9e69c" containerID="3123a66dbb851c07c414d2440ddfabf49c61379267a64fb3c92118d02a764047" exitCode=0 Nov 29 07:40:03 crc kubenswrapper[4660]: I1129 07:40:03.364761 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgw94" event={"ID":"b2de8145-fcb6-4202-8aa4-85b696c9e69c","Type":"ContainerDied","Data":"3123a66dbb851c07c414d2440ddfabf49c61379267a64fb3c92118d02a764047"} Nov 29 07:40:04 crc kubenswrapper[4660]: E1129 07:40:04.694828 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-vxrqr" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.756581 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914173 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914264 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914357 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b42sh\" (UniqueName: \"kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914396 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914448 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.914523 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.920577 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.920695 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts" (OuterVolumeSpecName: "scripts") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.921819 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh" (OuterVolumeSpecName: "kube-api-access-b42sh") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "kube-api-access-b42sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.921908 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:05 crc kubenswrapper[4660]: E1129 07:40:05.941178 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle podName:b2de8145-fcb6-4202-8aa4-85b696c9e69c nodeName:}" failed. No retries permitted until 2025-11-29 07:40:06.44111666 +0000 UTC m=+1496.994646559 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c") : error deleting /var/lib/kubelet/pods/b2de8145-fcb6-4202-8aa4-85b696c9e69c/volume-subpaths: remove /var/lib/kubelet/pods/b2de8145-fcb6-4202-8aa4-85b696c9e69c/volume-subpaths: no such file or directory Nov 29 07:40:05 crc kubenswrapper[4660]: I1129 07:40:05.944188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data" (OuterVolumeSpecName: "config-data") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.016472 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b42sh\" (UniqueName: \"kubernetes.io/projected/b2de8145-fcb6-4202-8aa4-85b696c9e69c-kube-api-access-b42sh\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.016516 4660 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.016529 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.016546 4660 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.016576 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.397621 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgw94" event={"ID":"b2de8145-fcb6-4202-8aa4-85b696c9e69c","Type":"ContainerDied","Data":"ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f"} Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.397900 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab340129edbb654e0f9cbec414450128710c73fd6ed7700c4bd8c2027d8f0a7f" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.397689 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgw94" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.524433 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") pod \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\" (UID: \"b2de8145-fcb6-4202-8aa4-85b696c9e69c\") " Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.529475 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2de8145-fcb6-4202-8aa4-85b696c9e69c" (UID: "b2de8145-fcb6-4202-8aa4-85b696c9e69c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.626952 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de8145-fcb6-4202-8aa4-85b696c9e69c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.969965 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54fd458c48-wjcjs"] Nov 29 07:40:06 crc kubenswrapper[4660]: E1129 07:40:06.970544 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="init" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970556 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="init" Nov 29 07:40:06 crc kubenswrapper[4660]: E1129 07:40:06.970579 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970586 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" Nov 29 07:40:06 crc kubenswrapper[4660]: E1129 07:40:06.970599 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc3b2aa-48e3-4989-8b86-a99155f3ee15" containerName="init" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970605 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc3b2aa-48e3-4989-8b86-a99155f3ee15" containerName="init" Nov 29 07:40:06 crc kubenswrapper[4660]: E1129 07:40:06.970639 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2de8145-fcb6-4202-8aa4-85b696c9e69c" containerName="keystone-bootstrap" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970645 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2de8145-fcb6-4202-8aa4-85b696c9e69c" containerName="keystone-bootstrap" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970813 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc3b2aa-48e3-4989-8b86-a99155f3ee15" containerName="init" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970832 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebdfbf0-79e1-4d9f-868d-cf129b0d139f" containerName="dnsmasq-dns" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.970847 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2de8145-fcb6-4202-8aa4-85b696c9e69c" containerName="keystone-bootstrap" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.971395 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.976314 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.976321 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.976477 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.976728 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rvsdd" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.985236 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.985311 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 29 07:40:06 crc kubenswrapper[4660]: I1129 07:40:06.999593 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54fd458c48-wjcjs"] Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.135873 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-fernet-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.135931 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-internal-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136143 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-credential-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136283 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-scripts\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136400 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-combined-ca-bundle\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136517 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-public-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: 
\"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136670 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hf8\" (UniqueName: \"kubernetes.io/projected/4696d01f-aadd-46fb-b966-f67035bb6ba4-kube-api-access-c5hf8\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.136746 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-config-data\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239099 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-internal-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239209 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-credential-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239247 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-scripts\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239285 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-combined-ca-bundle\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239321 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-public-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239372 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hf8\" (UniqueName: \"kubernetes.io/projected/4696d01f-aadd-46fb-b966-f67035bb6ba4-kube-api-access-c5hf8\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239407 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-config-data\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " 
pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.239430 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-fernet-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.243552 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-public-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.243684 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-fernet-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.244246 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-scripts\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.244934 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-config-data\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.245285 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-credential-keys\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.246588 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-combined-ca-bundle\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.255547 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4696d01f-aadd-46fb-b966-f67035bb6ba4-internal-tls-certs\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.272756 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hf8\" (UniqueName: \"kubernetes.io/projected/4696d01f-aadd-46fb-b966-f67035bb6ba4-kube-api-access-c5hf8\") pod \"keystone-54fd458c48-wjcjs\" (UID: \"4696d01f-aadd-46fb-b966-f67035bb6ba4\") " pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.289883 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.320844 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:40:07 crc kubenswrapper[4660]: I1129 07:40:07.320895 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:40:09 crc kubenswrapper[4660]: I1129 07:40:09.489443 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54fd458c48-wjcjs"] Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.431879 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7j498" event={"ID":"e34f6bca-d788-40bf-9065-f7f331a8f8d9","Type":"ContainerStarted","Data":"e6116c7794fb2072c876f0816b27f85bb269b082e8112814cb8f50c033164b46"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.435781 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54fd458c48-wjcjs" event={"ID":"4696d01f-aadd-46fb-b966-f67035bb6ba4","Type":"ContainerStarted","Data":"7fe435d2171f8c7082855503f97d7bbba787d94bbe367c67abe0b0e2f71f8b74"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.435826 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54fd458c48-wjcjs" event={"ID":"4696d01f-aadd-46fb-b966-f67035bb6ba4","Type":"ContainerStarted","Data":"7ca9e46d1ef491650d9c0abf9bffc6c5caea04737d84bb08919097a8b30b0407"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.436203 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.440986 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerStarted","Data":"156c64ed6d999d268ab91dc231927009499a9200a4ba906f1bf8c8a8b4315a1f"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.441020 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerStarted","Data":"a06fa5bc5dea81f87eb50d48ff9fc0f67ec231eb279da20f46810fc9e7f222f0"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.442940 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerStarted","Data":"b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a"} Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.461430 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7j498" podStartSLOduration=12.330177252 podStartE2EDuration="1m23.461356528s" podCreationTimestamp="2025-11-29 07:38:47 +0000 UTC" firstStartedPulling="2025-11-29 07:38:57.876976943 +0000 UTC m=+1428.430506842" lastFinishedPulling="2025-11-29 07:40:09.008156209 +0000 UTC m=+1499.561686118" observedRunningTime="2025-11-29 07:40:10.453330507 +0000 UTC m=+1501.006860396" watchObservedRunningTime="2025-11-29 07:40:10.461356528 +0000 UTC m=+1501.014886437" Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.485187 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54fd458c48-wjcjs" podStartSLOduration=4.485157733 podStartE2EDuration="4.485157733s" podCreationTimestamp="2025-11-29 07:40:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:10.476732611 +0000 UTC m=+1501.030262500" watchObservedRunningTime="2025-11-29 07:40:10.485157733 +0000 UTC m=+1501.038687632" Nov 29 07:40:10 crc kubenswrapper[4660]: I1129 07:40:10.500688 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76565fb74d-wgqb4" podStartSLOduration=3.5923079060000003 podStartE2EDuration="1m14.50066228s" podCreationTimestamp="2025-11-29 07:38:56 +0000 UTC" firstStartedPulling="2025-11-29 07:38:58.08906987 +0000 UTC m=+1428.642599769" lastFinishedPulling="2025-11-29 07:40:08.997424234 +0000 UTC m=+1499.550954143" observedRunningTime="2025-11-29 07:40:10.495711834 +0000 UTC m=+1501.049241753" watchObservedRunningTime="2025-11-29 07:40:10.50066228 +0000 UTC m=+1501.054192179" Nov 29 07:40:13 crc kubenswrapper[4660]: I1129 07:40:13.486888 4660 generic.go:334] "Generic (PLEG): container finished" podID="07f9cecd-58f3-4e48-acfc-6de8cce380df" containerID="b0ace9d78f96995af6c400c92ef2053d41212469653a67b0f15650f0c0070cf1" exitCode=0 Nov 29 07:40:13 crc kubenswrapper[4660]: I1129 07:40:13.486959 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wcwr9" event={"ID":"07f9cecd-58f3-4e48-acfc-6de8cce380df","Type":"ContainerDied","Data":"b0ace9d78f96995af6c400c92ef2053d41212469653a67b0f15650f0c0070cf1"} Nov 29 07:40:16 crc kubenswrapper[4660]: I1129 07:40:16.517659 4660 generic.go:334] "Generic (PLEG): container finished" podID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" containerID="e6116c7794fb2072c876f0816b27f85bb269b082e8112814cb8f50c033164b46" exitCode=0 Nov 29 07:40:16 crc kubenswrapper[4660]: I1129 07:40:16.517888 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7j498" event={"ID":"e34f6bca-d788-40bf-9065-f7f331a8f8d9","Type":"ContainerDied","Data":"e6116c7794fb2072c876f0816b27f85bb269b082e8112814cb8f50c033164b46"} Nov 29 07:40:17 crc kubenswrapper[4660]: I1129 07:40:17.166669 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:40:17 crc kubenswrapper[4660]: I1129 07:40:17.166851 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:40:17 crc kubenswrapper[4660]: I1129 07:40:17.323029 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.770880 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wcwr9" Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.776499 4660 util.go:48] "No ready sandbox for pod can be found. 
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.947913 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data\") pod \"07f9cecd-58f3-4e48-acfc-6de8cce380df\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948228 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948291 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948339 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle\") pod \"07f9cecd-58f3-4e48-acfc-6de8cce380df\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948358 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948387 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5bsx\" (UniqueName: \"kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948422 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948506 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948542 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dflgj\" (UniqueName: \"kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj\") pod \"07f9cecd-58f3-4e48-acfc-6de8cce380df\" (UID: \"07f9cecd-58f3-4e48-acfc-6de8cce380df\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.948635 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts\") pod \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\" (UID: \"e34f6bca-d788-40bf-9065-f7f331a8f8d9\") "
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.949051 4660 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e34f6bca-d788-40bf-9065-f7f331a8f8d9-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.953592 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx" (OuterVolumeSpecName: "kube-api-access-t5bsx") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "kube-api-access-t5bsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.954311 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "07f9cecd-58f3-4e48-acfc-6de8cce380df" (UID: "07f9cecd-58f3-4e48-acfc-6de8cce380df"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.954530 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts" (OuterVolumeSpecName: "scripts") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.958309 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj" (OuterVolumeSpecName: "kube-api-access-dflgj") pod "07f9cecd-58f3-4e48-acfc-6de8cce380df" (UID: "07f9cecd-58f3-4e48-acfc-6de8cce380df"). InnerVolumeSpecName "kube-api-access-dflgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.971621 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:18 crc kubenswrapper[4660]: I1129 07:40:18.985652 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f9cecd-58f3-4e48-acfc-6de8cce380df" (UID: "07f9cecd-58f3-4e48-acfc-6de8cce380df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.002344 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.031969 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data" (OuterVolumeSpecName: "config-data") pod "e34f6bca-d788-40bf-9065-f7f331a8f8d9" (UID: "e34f6bca-d788-40bf-9065-f7f331a8f8d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050576 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050621 4660 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050634 4660 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050642 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f9cecd-58f3-4e48-acfc-6de8cce380df-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050652 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5bsx\" (UniqueName: \"kubernetes.io/projected/e34f6bca-d788-40bf-9065-f7f331a8f8d9-kube-api-access-t5bsx\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050660 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050668 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34f6bca-d788-40bf-9065-f7f331a8f8d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.050676 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dflgj\" (UniqueName: \"kubernetes.io/projected/07f9cecd-58f3-4e48-acfc-6de8cce380df-kube-api-access-dflgj\") on node \"crc\" DevicePath \"\""
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.548634 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7j498" event={"ID":"e34f6bca-d788-40bf-9065-f7f331a8f8d9","Type":"ContainerDied","Data":"d8c9c9b8bbe607b4ef0f9d3d50556852d1c45293cb50b71cc994e753c6243e17"}
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.548675 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c9c9b8bbe607b4ef0f9d3d50556852d1c45293cb50b71cc994e753c6243e17"
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.548706 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7j498"
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.550565 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wcwr9" event={"ID":"07f9cecd-58f3-4e48-acfc-6de8cce380df","Type":"ContainerDied","Data":"39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61"}
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.550590 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wcwr9"
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.550595 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39022821ff62d50f13cf3db525db1532ffe8b9ceca48582871162536efedda61"
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.552290 4660 generic.go:334] "Generic (PLEG): container finished" podID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" containerID="ec17e27d5188783f27817fc57273a36da1772c39c9c40da96513f83e73893bb2" exitCode=0
Nov 29 07:40:19 crc kubenswrapper[4660]: I1129 07:40:19.552334 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gcb27" event={"ID":"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf","Type":"ContainerDied","Data":"ec17e27d5188783f27817fc57273a36da1772c39c9c40da96513f83e73893bb2"}
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.098931 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f844c8dbc-j8g6j"]
Nov 29 07:40:20 crc kubenswrapper[4660]: E1129 07:40:20.106754 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" containerName="cinder-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.106793 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" containerName="cinder-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: E1129 07:40:20.106840 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" containerName="barbican-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.106846 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" containerName="barbican-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.107094 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" containerName="cinder-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.107109 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" containerName="barbican-db-sync"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.108113 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.123685 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f844c8dbc-j8g6j"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.128154 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-57df7"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.128426 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.134512 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.139737 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.144413 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.154483 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.154633 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.165465 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.165675 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2nmzh"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.319684 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321744 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321808 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321845 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f5216f-e274-4987-b2cc-98effb9661eb-logs\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321870 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321904 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.321973 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-combined-ca-bundle\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.322002 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.322034 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data-custom\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.322059 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7bh4\" (UniqueName: \"kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.322083 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4g9t\" (UniqueName: \"kubernetes.io/projected/b1f5216f-e274-4987-b2cc-98effb9661eb-kube-api-access-w4g9t\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.322109 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.359560 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65db494558-68jff"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.361303 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.369127 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.369520 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-b2krk"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.371022 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.381692 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65db494558-68jff"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.387565 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-b2krk"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.438961 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ca5104-7d38-40bc-aa55-19dbd28b40f3-logs\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439064 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439098 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439151 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439207 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f5216f-e274-4987-b2cc-98effb9661eb-logs\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439239 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439289 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439354 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6sf\" (UniqueName: \"kubernetes.io/projected/25ca5104-7d38-40bc-aa55-19dbd28b40f3-kube-api-access-7b6sf\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439448 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-combined-ca-bundle\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439476 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439500 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-combined-ca-bundle\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439531 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data-custom\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439558 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data-custom\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439584 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7bh4\" (UniqueName: \"kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439624 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4g9t\" (UniqueName: \"kubernetes.io/projected/b1f5216f-e274-4987-b2cc-98effb9661eb-kube-api-access-w4g9t\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.439656 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.440085 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.440359 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f5216f-e274-4987-b2cc-98effb9661eb-logs\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.455559 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-combined-ca-bundle\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.472223 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data-custom\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.488195 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4g9t\" (UniqueName: \"kubernetes.io/projected/b1f5216f-e274-4987-b2cc-98effb9661eb-kube-api-access-w4g9t\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.488241 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.488242 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.488814 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7bh4\" (UniqueName: \"kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.489574 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.490266 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.492855 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f5216f-e274-4987-b2cc-98effb9661eb-config-data\") pod \"barbican-worker-f844c8dbc-j8g6j\" (UID: \"b1f5216f-e274-4987-b2cc-98effb9661eb\") " pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.518786 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f844c8dbc-j8g6j"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.541571 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.541820 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6sf\" (UniqueName: \"kubernetes.io/projected/25ca5104-7d38-40bc-aa55-19dbd28b40f3-kube-api-access-7b6sf\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.541902 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542004 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-combined-ca-bundle\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542083 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxstv\" (UniqueName: \"kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542154 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data-custom\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542230 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542306 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ca5104-7d38-40bc-aa55-19dbd28b40f3-logs\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542457 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.542519 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.546308 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ca5104-7d38-40bc-aa55-19dbd28b40f3-logs\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.551079 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.552987 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.555059 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-combined-ca-bundle\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.561091 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ca5104-7d38-40bc-aa55-19dbd28b40f3-config-data-custom\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.569476 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.571106 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9c946b766-6n2bk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.575903 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.583376 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"]
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.597366 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-b2krk"]
Nov 29 07:40:20 crc kubenswrapper[4660]: E1129 07:40:20.598027 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-vxstv ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" podUID="736e39ee-a914-407d-a11e-fd81a8ba7679"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.610463 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6sf\" (UniqueName: \"kubernetes.io/projected/25ca5104-7d38-40bc-aa55-19dbd28b40f3-kube-api-access-7b6sf\") pod \"barbican-keystone-listener-65db494558-68jff\" (UID: \"25ca5104-7d38-40bc-aa55-19dbd28b40f3\") " pod="openstack/barbican-keystone-listener-65db494558-68jff"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.644756 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.644890 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.644973 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.645008 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxstv\" (UniqueName: \"kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.645053 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.645079 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk"
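The dnsmasq-dns-5bdb97b9f9-b2krk pod above is ADDed at 07:40:20.369520 and DELETEd at 07:40:20.597366, barely 0.2s later, while its volume mounts were still in flight; the "Error syncing pod ... context canceled" entry is the worker aborting that half-finished sync, and a replacement pod from a new template hash (dnsmasq-dns-dc6887bc5-9qc5w) is ADDed just below. A sketch for spotting such short-lived pods by grouping SyncLoop verbs per pod; the timestamp and pods=[...] shapes are taken from the entries above, and `log_lines` is an assumed iterable of raw journal lines:

```python
import re
from collections import defaultdict

# Matches e.g.: I1129 07:40:20.597366 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/..."]
EVENT = re.compile(
    r'I\d{4} (\d{2}:\d{2}:\d{2}\.\d+) \d+ kubelet\.go:\d+\] '
    r'"SyncLoop (ADD|UPDATE|DELETE)" source="api" pods=\["([^"]+)"\]'
)

def deleted_pods(log_lines):
    """Return the SyncLoop event history of every pod that saw a DELETE."""
    history = defaultdict(list)
    for line in log_lines:
        m = EVENT.search(line)
        if m:
            ts, verb, pod = m.groups()
            history[pod].append((ts, verb))
    return {pod: evs for pod, evs in history.items()
            if any(verb == "DELETE" for _, verb in evs)}
```

Run over this capture it would flag only openstack/dnsmasq-dns-5bdb97b9f9-b2krk, with its ADD, UPDATE, and DELETE all inside the same second.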
\"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.646168 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.646275 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.647244 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.647340 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.647397 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.675401 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxstv\" (UniqueName: \"kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv\") pod \"dnsmasq-dns-5bdb97b9f9-b2krk\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.709689 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65db494558-68jff" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.711212 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.713100 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.737638 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.748166 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.748232 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.748252 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxdv\" (UniqueName: \"kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.748326 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.748357 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.794687 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.796571 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.802362 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.819556 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.852748 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.852820 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.852855 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxdv\" (UniqueName: \"kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.852935 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.852978 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.853047 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd7w\" (UniqueName: \"kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.891283 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.891425 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " 
pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.891547 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.891683 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.891759 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.892418 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.905337 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: I1129 07:40:20.934011 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:20 crc kubenswrapper[4660]: E1129 07:40:20.934296 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.007036 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.011862 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-627dx\" (UniqueName: \"kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.011891 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftd7w\" (UniqueName: \"kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.011935 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.011967 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012181 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012216 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012234 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012280 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.012437 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.013187 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.013712 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.014267 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.016447 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.024569 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.029263 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.046356 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxdv\" (UniqueName: \"kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv\") pod \"barbican-api-9c946b766-6n2bk\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.054122 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftd7w\" (UniqueName: \"kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w\") pod \"dnsmasq-dns-dc6887bc5-9qc5w\" (UID: 
\"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.074374 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.117455 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-627dx\" (UniqueName: \"kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.117511 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.119224 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.119358 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.119388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.119432 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.119495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.121585 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.121946 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.128203 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.128539 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.144136 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.194374 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-627dx\" (UniqueName: \"kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.195508 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts\") pod \"cinder-api-0\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.219203 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.442012 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.500464 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65db494558-68jff"] Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.518067 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gcb27" Nov 29 07:40:21 crc kubenswrapper[4660]: W1129 07:40:21.522358 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25ca5104_7d38_40bc_aa55_19dbd28b40f3.slice/crio-8ab0b12754c9c61b3f9be8f1597b71a140a8ec1cb350670a730adc014d3b00b4 WatchSource:0}: Error finding container 8ab0b12754c9c61b3f9be8f1597b71a140a8ec1cb350670a730adc014d3b00b4: Status 404 returned error can't find the container with id 8ab0b12754c9c61b3f9be8f1597b71a140a8ec1cb350670a730adc014d3b00b4 Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.528475 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle\") pod \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.528517 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm5xj\" (UniqueName: \"kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj\") pod \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.528544 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data\") pod \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.528584 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data\") pod \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\" (UID: \"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.548173 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj" (OuterVolumeSpecName: "kube-api-access-dm5xj") pod "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" (UID: "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf"). InnerVolumeSpecName "kube-api-access-dm5xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.549723 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" (UID: "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.588782 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65db494558-68jff" event={"ID":"25ca5104-7d38-40bc-aa55-19dbd28b40f3","Type":"ContainerStarted","Data":"8ab0b12754c9c61b3f9be8f1597b71a140a8ec1cb350670a730adc014d3b00b4"} Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.595443 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data" (OuterVolumeSpecName: "config-data") pod "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" (UID: "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.598014 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerStarted","Data":"67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2"} Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.598247 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="ceilometer-notification-agent" containerID="cri-o://4e1b4c70933dd9a88006812ea0df83f5430cdc4a139b486ce5fc77b6b709e8b3" gracePeriod=30 Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.598544 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.598926 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="proxy-httpd" containerID="cri-o://67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2" gracePeriod=30 Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.599032 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="sg-core" containerID="cri-o://b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a" gracePeriod=30 Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.600861 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" (UID: "0a4ba5b4-3360-458f-8de9-6c0630ad7cbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.611415 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vxrqr" event={"ID":"a8e0c494-1877-49d7-8877-308fb75d13b1","Type":"ContainerStarted","Data":"49c823586286d21c211797d636011301862e0c8db42df626505293156f102fbe"} Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.614269 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.614770 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gcb27" event={"ID":"0a4ba5b4-3360-458f-8de9-6c0630ad7cbf","Type":"ContainerDied","Data":"2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600"} Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.614792 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.614970 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gcb27" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.630339 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.630374 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm5xj\" (UniqueName: \"kubernetes.io/projected/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-kube-api-access-dm5xj\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.630386 4660 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.630397 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.656586 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.680666 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.688881 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vxrqr" podStartSLOduration=12.494007532 podStartE2EDuration="1m34.688863025s" podCreationTimestamp="2025-11-29 07:38:47 +0000 UTC" firstStartedPulling="2025-11-29 07:38:57.732779414 +0000 UTC m=+1428.286309313" lastFinishedPulling="2025-11-29 07:40:19.927634907 +0000 UTC m=+1510.481164806" observedRunningTime="2025-11-29 07:40:21.664584117 +0000 UTC m=+1512.218114016" watchObservedRunningTime="2025-11-29 07:40:21.688863025 +0000 UTC m=+1512.242392924" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.733033 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxstv\" (UniqueName: \"kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.733080 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.733130 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.733771 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.734322 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.742784 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv" (OuterVolumeSpecName: "kube-api-access-vxstv") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "kube-api-access-vxstv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: W1129 07:40:21.766825 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a829b0a_ecfa_4804_9614_7db77030e07c.slice/crio-12900f65fd5d4143dbeb9ec301f5bdcbc462ae86ef253a0402711e625f0c30c0 WatchSource:0}: Error finding container 12900f65fd5d4143dbeb9ec301f5bdcbc462ae86ef253a0402711e625f0c30c0: Status 404 returned error can't find the container with id 12900f65fd5d4143dbeb9ec301f5bdcbc462ae86ef253a0402711e625f0c30c0 Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836081 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836193 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836209 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb\") pod \"736e39ee-a914-407d-a11e-fd81a8ba7679\" (UID: \"736e39ee-a914-407d-a11e-fd81a8ba7679\") " Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836680 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxstv\" (UniqueName: \"kubernetes.io/projected/736e39ee-a914-407d-a11e-fd81a8ba7679-kube-api-access-vxstv\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836691 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836698 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.836916 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config" (OuterVolumeSpecName: "config") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.837105 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.837540 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "736e39ee-a914-407d-a11e-fd81a8ba7679" (UID: "736e39ee-a914-407d-a11e-fd81a8ba7679"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:21 crc kubenswrapper[4660]: E1129 07:40:21.923389 4660 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4ba5b4_3360_458f_8de9_6c0630ad7cbf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod994934b0_1ed3_4a63_b231_34e923c9a2ad.slice/crio-67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4ba5b4_3360_458f_8de9_6c0630ad7cbf.slice/crio-2746c37f61f1c7bce2c5d98fd0d6f0d92b8181b7baaa3104ff5126a05125e600\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod994934b0_1ed3_4a63_b231_34e923c9a2ad.slice/crio-b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.938963 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.939203 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.939331 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736e39ee-a914-407d-a11e-fd81a8ba7679-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:21 crc kubenswrapper[4660]: I1129 07:40:21.955079 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f844c8dbc-j8g6j"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.238128 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.289971 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:22 crc kubenswrapper[4660]: E1129 07:40:22.290338 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" containerName="glance-db-sync" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.290357 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" containerName="glance-db-sync" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.290513 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" containerName="glance-db-sync" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.291378 4660 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.328246 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347701 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347748 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347771 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347793 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347814 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.347827 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfv6\" (UniqueName: \"kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449636 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449713 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449745 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449772 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449795 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfv6\" (UniqueName: \"kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.449810 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.450885 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.451351 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.459255 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.460133 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.462002 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.505283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfv6\" (UniqueName: 
\"kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6\") pod \"dnsmasq-dns-5cc8b5d5c5-f6wcq\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.611396 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.646698 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.653854 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerStarted","Data":"12900f65fd5d4143dbeb9ec301f5bdcbc462ae86ef253a0402711e625f0c30c0"} Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.655152 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f844c8dbc-j8g6j" event={"ID":"b1f5216f-e274-4987-b2cc-98effb9661eb","Type":"ContainerStarted","Data":"0af9cbb86688d9ab68a7fb6064177026fc218fb75b2457e9ee842e8c0c4256ad"} Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.657158 4660 generic.go:334] "Generic (PLEG): container finished" podID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerID="67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2" exitCode=0 Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.657181 4660 generic.go:334] "Generic (PLEG): container finished" podID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerID="b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a" exitCode=2 Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.657245 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-b2krk" Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.657248 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerDied","Data":"67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2"} Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.657281 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerDied","Data":"b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a"} Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.713665 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.758551 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-b2krk"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.795697 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-b2krk"] Nov 29 07:40:22 crc kubenswrapper[4660]: I1129 07:40:22.827172 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.081948 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.085382 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.092594 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h6jp5" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.093104 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.093747 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.152870 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177714 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177785 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177822 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177850 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177872 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177902 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.177954 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xxs\" (UniqueName: \"kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " 
pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.279590 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.279979 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.280256 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.280274 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.280320 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.280393 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xxs\" (UniqueName: \"kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.283788 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.284632 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.286128 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.286799 4660 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.301707 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.310842 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.312823 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.333979 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xxs\" (UniqueName: \"kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.343321 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.496111 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.515232 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.515353 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.516852 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.523218 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591122 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591191 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591213 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591229 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591253 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591284 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcvfx\" (UniqueName: \"kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.591330 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.685538 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" event={"ID":"4e38826a-73aa-428f-a6f7-a18b0184a18a","Type":"ContainerStarted","Data":"98a34471265444027f3baac5f54fd91191919c72130b173288c5da23f0c0d39f"} Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692633 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692696 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcvfx\" (UniqueName: \"kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692745 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692825 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692879 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692903 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.692953 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.693063 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.693805 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.694163 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.703862 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.706456 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.744813 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcvfx\" (UniqueName: \"kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.744894 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.746066 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736e39ee-a914-407d-a11e-fd81a8ba7679" path="/var/lib/kubelet/pods/736e39ee-a914-407d-a11e-fd81a8ba7679/volumes" Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.746471 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"62c8714f-6773-4587-a147-350224a81832","Type":"ContainerStarted","Data":"09d8373a56a5725a09ef4f5c73f280cf7bef3329ffd86bd69e5dc43a987eeafb"} Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.748878 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerStarted","Data":"a6bd390f64572a61401cdd839775000daac71280b7507dff45e0df1144559afd"} Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.755066 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:23 crc kubenswrapper[4660]: W1129 07:40:23.778409 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode039299e_a02d_4f10_aa3a_d755d77cc9ac.slice/crio-52dbf3b62b81f099c942dad074afca350382a273f78c419bf1baf0e2929e1520 WatchSource:0}: Error finding container 52dbf3b62b81f099c942dad074afca350382a273f78c419bf1baf0e2929e1520: Status 404 returned error can't find the container with id 52dbf3b62b81f099c942dad074afca350382a273f78c419bf1baf0e2929e1520 Nov 29 07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.788000 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " pod="openstack/glance-default-internal-api-0" Nov 29 
07:40:23 crc kubenswrapper[4660]: I1129 07:40:23.886093 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.586322 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.808083 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerStarted","Data":"77dc1d92a1f0ab7465b9a50ce03d61483aeefa1bc9183d89b913fca2dc7f64f0"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.812115 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerStarted","Data":"46c07f3efd0ec9a7da11e54c925e9e525e9f543d7dbb4dc48f67851681658bc3"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.812156 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.812169 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerStarted","Data":"63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.812194 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.820410 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerStarted","Data":"8e8933b2fec35223ce9ddbf8531693404ddbd1959810a7b5e6aa15240eb192e3"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.820455 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerStarted","Data":"52dbf3b62b81f099c942dad074afca350382a273f78c419bf1baf0e2929e1520"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.832094 4660 generic.go:334] "Generic (PLEG): container finished" podID="4e38826a-73aa-428f-a6f7-a18b0184a18a" containerID="ac61f18e36cae006fde7da5d7f309adf076dd2c12b49e31594864686adb778a3" exitCode=0 Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.832155 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" event={"ID":"4e38826a-73aa-428f-a6f7-a18b0184a18a","Type":"ContainerDied","Data":"ac61f18e36cae006fde7da5d7f309adf076dd2c12b49e31594864686adb778a3"} Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.864254 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9c946b766-6n2bk" podStartSLOduration=4.8642348460000004 podStartE2EDuration="4.864234846s" podCreationTimestamp="2025-11-29 07:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:24.843214408 +0000 UTC m=+1515.396744307" watchObservedRunningTime="2025-11-29 07:40:24.864234846 +0000 UTC m=+1515.417764745" Nov 29 07:40:24 crc kubenswrapper[4660]: I1129 07:40:24.870791 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
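
The paired messages "No sandbox for pod can be found. Need to start a new one" (util.go:30) and "No ready sandbox for pod can be found. Need to start a new one" (util.go:48) mark the two cases in which the kubelet (re)creates a pod sandbox: none exists yet, or the newest one is no longer ready. A minimal sketch of that decision, with invented types (this is not kubelet's actual code):

    package main

    import "fmt"

    type sandbox struct {
    	id    string
    	ready bool
    }

    // needsNewSandbox reports whether the pod needs a fresh sandbox and,
    // if so, which of the two log messages applies.
    func needsNewSandbox(existing []sandbox) (bool, string) {
    	if len(existing) == 0 {
    		return true, "No sandbox for pod can be found. Need to start a new one"
    	}
    	if !existing[0].ready { // newest sandbox first, as the runtime lists them
    		return true, "No ready sandbox for pod can be found. Need to start a new one"
    	}
    	return false, ""
    }

    func main() {
    	if create, why := needsNewSandbox(nil); create {
    		fmt.Println(why) // first start: no sandbox at all
    	}
    	if create, why := needsNewSandbox([]sandbox{{id: "98a34471", ready: false}}); create {
    		fmt.Println(why) // e.g. after the pod's containers exited
    	}
    }
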
pod="openstack/cinder-api-0" event={"ID":"62c8714f-6773-4587-a147-350224a81832","Type":"ContainerStarted","Data":"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f"} Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.123478 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.438399 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.477643 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftd7w\" (UniqueName: \"kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.477742 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.477811 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.477834 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.480153 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.480177 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config\") pod \"4e38826a-73aa-428f-a6f7-a18b0184a18a\" (UID: \"4e38826a-73aa-428f-a6f7-a18b0184a18a\") " Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.501322 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w" (OuterVolumeSpecName: "kube-api-access-ftd7w") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "kube-api-access-ftd7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.531021 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config" (OuterVolumeSpecName: "config") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.543166 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.545601 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.593325 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftd7w\" (UniqueName: \"kubernetes.io/projected/4e38826a-73aa-428f-a6f7-a18b0184a18a-kube-api-access-ftd7w\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.593622 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.593634 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.593642 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.594682 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.597250 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e38826a-73aa-428f-a6f7-a18b0184a18a" (UID: "4e38826a-73aa-428f-a6f7-a18b0184a18a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.695517 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.695562 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e38826a-73aa-428f-a6f7-a18b0184a18a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.894784 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.896656 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-9qc5w" event={"ID":"4e38826a-73aa-428f-a6f7-a18b0184a18a","Type":"ContainerDied","Data":"98a34471265444027f3baac5f54fd91191919c72130b173288c5da23f0c0d39f"} Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.896694 4660 scope.go:117] "RemoveContainer" containerID="ac61f18e36cae006fde7da5d7f309adf076dd2c12b49e31594864686adb778a3" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.911793 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerStarted","Data":"1c768356967ba564d8d4bbac4211096b70539c7efe4010335ad103dbcfb534e1"} Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.928093 4660 generic.go:334] "Generic (PLEG): container finished" podID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerID="8e8933b2fec35223ce9ddbf8531693404ddbd1959810a7b5e6aa15240eb192e3" exitCode=0 Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.929393 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerDied","Data":"8e8933b2fec35223ce9ddbf8531693404ddbd1959810a7b5e6aa15240eb192e3"} Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.929439 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerStarted","Data":"1dc3033bf1d2b802bc727bd7216a1fcaa5c37a0197d8f9a0293c9823dbe86f60"} Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.931072 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:25 crc kubenswrapper[4660]: I1129 07:40:25.975359 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" podStartSLOduration=3.9753376190000003 podStartE2EDuration="3.975337619s" podCreationTimestamp="2025-11-29 07:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:25.968588734 +0000 UTC m=+1516.522118643" watchObservedRunningTime="2025-11-29 07:40:25.975337619 +0000 UTC m=+1516.528867518" Nov 29 07:40:26 crc kubenswrapper[4660]: I1129 07:40:26.052062 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:26 crc kubenswrapper[4660]: I1129 07:40:26.094199 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-9qc5w"] Nov 29 07:40:26 crc kubenswrapper[4660]: I1129 07:40:26.952386 4660 generic.go:334] "Generic (PLEG): container finished" podID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerID="4e1b4c70933dd9a88006812ea0df83f5430cdc4a139b486ce5fc77b6b709e8b3" exitCode=0 Nov 29 07:40:26 crc kubenswrapper[4660]: I1129 07:40:26.952650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerDied","Data":"4e1b4c70933dd9a88006812ea0df83f5430cdc4a139b486ce5fc77b6b709e8b3"} Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:26.957999 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
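
"Generic (PLEG): container finished" (generic.go:334) and the "SyncLoop (PLEG)" events come from the pod lifecycle event generator, which periodically relists containers from CRI-O and diffs the result against its previous snapshot; the exitCode=0 entries here are the dnsmasq pods' init containers completing normally. A toy version of that diff (the real PLEG also tracks exit codes and sandboxes; the types here are invented):

    package main

    import "fmt"

    type containerEvent struct{ kind, id string }

    // diffSnapshots compares two relist snapshots (container ID -> running)
    // and emits started/died events for whatever changed in between.
    func diffSnapshots(prev, curr map[string]bool) []containerEvent {
    	var events []containerEvent
    	for id, running := range curr {
    		if running && !prev[id] {
    			events = append(events, containerEvent{"ContainerStarted", id})
    		}
    	}
    	for id, running := range prev {
    		if running && !curr[id] {
    			events = append(events, containerEvent{"ContainerDied", id})
    		}
    	}
    	return events
    }

    func main() {
    	prev := map[string]bool{"ac61f18e": true}
    	curr := map[string]bool{"ac61f18e": false, "8e8933b2": true}
    	for _, e := range diffSnapshots(prev, curr) {
    		fmt.Println(e.kind, e.id)
    	}
    }
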
event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerStarted","Data":"41184a8dc4c3b973d66261cddd7157511309dd91d0cb1cc49776567c504bac76"} Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.168239 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.284787 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.298106 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.336435 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359705 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359792 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgl55\" (UniqueName: \"kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359815 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359841 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359886 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.359954 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.360063 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml\") pod \"994934b0-1ed3-4a63-b231-34e923c9a2ad\" (UID: \"994934b0-1ed3-4a63-b231-34e923c9a2ad\") " Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.364015 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.364204 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.380965 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts" (OuterVolumeSpecName: "scripts") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.389927 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55" (OuterVolumeSpecName: "kube-api-access-jgl55") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "kube-api-access-jgl55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.403714 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.432021 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.463993 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgl55\" (UniqueName: \"kubernetes.io/projected/994934b0-1ed3-4a63-b231-34e923c9a2ad-kube-api-access-jgl55\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.464028 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.464039 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.464050 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/994934b0-1ed3-4a63-b231-34e923c9a2ad-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.511308 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.566455 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.612129 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.670183 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.682026 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data" (OuterVolumeSpecName: "config-data") pod "994934b0-1ed3-4a63-b231-34e923c9a2ad" (UID: "994934b0-1ed3-4a63-b231-34e923c9a2ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.710179 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e38826a-73aa-428f-a6f7-a18b0184a18a" path="/var/lib/kubelet/pods/4e38826a-73aa-428f-a6f7-a18b0184a18a/volumes" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.775776 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/994934b0-1ed3-4a63-b231-34e923c9a2ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.973458 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerStarted","Data":"1cd84702aaf7f54338aa618c9459e93156eccb4415121b11d56c52cd5a9fd41a"} Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.979117 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"62c8714f-6773-4587-a147-350224a81832","Type":"ContainerStarted","Data":"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288"} Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.979478 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.991958 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"994934b0-1ed3-4a63-b231-34e923c9a2ad","Type":"ContainerDied","Data":"aa274ed67d854999818914e50d939357464925d8ff5ea5fd6eb4cde1581e78e6"} Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.992007 4660 scope.go:117] "RemoveContainer" containerID="67af91c3cf013e4b27c4bd8343de69a8f203458bd5cb904f40948b43b415b7b2" Nov 29 07:40:27 crc kubenswrapper[4660]: I1129 07:40:27.992106 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.005698 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.005682865 podStartE2EDuration="8.005682865s" podCreationTimestamp="2025-11-29 07:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:28.002943409 +0000 UTC m=+1518.556473308" watchObservedRunningTime="2025-11-29 07:40:28.005682865 +0000 UTC m=+1518.559212764" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.005964 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerStarted","Data":"650dd2807d241ca0e5ec3a42fe7b7a169cbdb6ac7bbf9a866a6066822f280d34"} Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.054935 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.084368 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.100669 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:28 crc kubenswrapper[4660]: E1129 07:40:28.101205 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e38826a-73aa-428f-a6f7-a18b0184a18a" containerName="init" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101226 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e38826a-73aa-428f-a6f7-a18b0184a18a" containerName="init" Nov 29 07:40:28 crc kubenswrapper[4660]: E1129 07:40:28.101237 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="sg-core" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101242 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="sg-core" Nov 29 07:40:28 crc kubenswrapper[4660]: E1129 07:40:28.101252 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="ceilometer-notification-agent" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101259 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="ceilometer-notification-agent" Nov 29 07:40:28 crc kubenswrapper[4660]: E1129 07:40:28.101274 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="proxy-httpd" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101308 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="proxy-httpd" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101678 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e38826a-73aa-428f-a6f7-a18b0184a18a" containerName="init" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101691 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="ceilometer-notification-agent" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.101734 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="proxy-httpd" Nov 29 07:40:28 crc 
kubenswrapper[4660]: I1129 07:40:28.101746 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" containerName="sg-core" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.104451 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.107031 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.110144 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.111436 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.186105 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkd6c\" (UniqueName: \"kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.186178 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.187220 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.187313 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.187395 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.187479 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.187576 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289622 4660 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289674 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289698 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289744 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289769 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289801 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.289842 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkd6c\" (UniqueName: \"kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.290239 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.291283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.291897 4660 scope.go:117] "RemoveContainer" containerID="b5a924b64e86c5a613c2da804af848d209c5829c1269d5898873185400f9fe2a" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.356545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 
crc kubenswrapper[4660]: I1129 07:40:28.384562 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.385526 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.393838 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkd6c\" (UniqueName: \"kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.411124 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " pod="openstack/ceilometer-0" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.494599 4660 scope.go:117] "RemoveContainer" containerID="4e1b4c70933dd9a88006812ea0df83f5430cdc4a139b486ce5fc77b6b709e8b3" Nov 29 07:40:28 crc kubenswrapper[4660]: I1129 07:40:28.584218 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.066998 4660 generic.go:334] "Generic (PLEG): container finished" podID="a8e0c494-1877-49d7-8877-308fb75d13b1" containerID="49c823586286d21c211797d636011301862e0c8db42df626505293156f102fbe" exitCode=0 Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.067084 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vxrqr" event={"ID":"a8e0c494-1877-49d7-8877-308fb75d13b1","Type":"ContainerDied","Data":"49c823586286d21c211797d636011301862e0c8db42df626505293156f102fbe"} Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.070886 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65db494558-68jff" event={"ID":"25ca5104-7d38-40bc-aa55-19dbd28b40f3","Type":"ContainerStarted","Data":"94d7b5a4d95a849103f223efc8ea4264b2e182870a85148c3d78f74511d33cb5"} Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.073415 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f844c8dbc-j8g6j" event={"ID":"b1f5216f-e274-4987-b2cc-98effb9661eb","Type":"ContainerStarted","Data":"47e39a6fc4fa8a11d427dd7fcf1a3d35e08778579ab70d34bf8fe4277aee2be6"} Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.075137 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api-log" containerID="cri-o://e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f" gracePeriod=30 Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.075229 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api" 
containerID="cri-o://78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288" gracePeriod=30 Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.277478 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:40:29 crc kubenswrapper[4660]: I1129 07:40:29.707783 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994934b0-1ed3-4a63-b231-34e923c9a2ad" path="/var/lib/kubelet/pods/994934b0-1ed3-4a63-b231-34e923c9a2ad/volumes" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.104127 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65db494558-68jff" event={"ID":"25ca5104-7d38-40bc-aa55-19dbd28b40f3","Type":"ContainerStarted","Data":"f0f68b7724d435ef4f14972ae2d412f33d811373f32c3c440bc9b03cf9e40def"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.118010 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.125137 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f844c8dbc-j8g6j" event={"ID":"b1f5216f-e274-4987-b2cc-98effb9661eb","Type":"ContainerStarted","Data":"d4c7d3c07f659904ce69e4bd537e310019adaa2886e800f2ea923670bbfe2698"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.125752 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65db494558-68jff" podStartSLOduration=3.373916685 podStartE2EDuration="10.125726148s" podCreationTimestamp="2025-11-29 07:40:20 +0000 UTC" firstStartedPulling="2025-11-29 07:40:21.540312276 +0000 UTC m=+1512.093842175" lastFinishedPulling="2025-11-29 07:40:28.292121739 +0000 UTC m=+1518.845651638" observedRunningTime="2025-11-29 07:40:30.122304714 +0000 UTC m=+1520.675834613" watchObservedRunningTime="2025-11-29 07:40:30.125726148 +0000 UTC m=+1520.679256047" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.128404 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerStarted","Data":"33a03a2be56a887f58bd58c3155f83a1af441933a66f0a72385717d5a0353ee0"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.128538 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-log" containerID="cri-o://1cd84702aaf7f54338aa618c9459e93156eccb4415121b11d56c52cd5a9fd41a" gracePeriod=30 Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.129286 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-httpd" containerID="cri-o://33a03a2be56a887f58bd58c3155f83a1af441933a66f0a72385717d5a0353ee0" gracePeriod=30 Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.143648 4660 generic.go:334] "Generic (PLEG): container finished" podID="62c8714f-6773-4587-a147-350224a81832" containerID="78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288" exitCode=0 Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.143675 4660 generic.go:334] "Generic (PLEG): container finished" podID="62c8714f-6773-4587-a147-350224a81832" containerID="e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f" exitCode=143 Nov 29 07:40:30 crc kubenswrapper[4660]: 
I1129 07:40:30.143712 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"62c8714f-6773-4587-a147-350224a81832","Type":"ContainerDied","Data":"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.143736 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"62c8714f-6773-4587-a147-350224a81832","Type":"ContainerDied","Data":"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.143751 4660 scope.go:117] "RemoveContainer" containerID="78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.143834 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164114 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164146 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164260 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164312 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164374 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164391 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.164407 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-627dx\" (UniqueName: \"kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx\") pod \"62c8714f-6773-4587-a147-350224a81832\" (UID: \"62c8714f-6773-4587-a147-350224a81832\") " Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.165459 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.165737 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs" (OuterVolumeSpecName: "logs") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.212757 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerStarted","Data":"0f723a8e689412e18eb3c5ec8058e3cfd349c7efee72dcb14e1d446a074d0058"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.216439 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerStarted","Data":"41f8aa6b255e722aedc8a45922e1fe0efe3a3d94cfac7313eebbd27e33f44404"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.216692 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-log" containerID="cri-o://41184a8dc4c3b973d66261cddd7157511309dd91d0cb1cc49776567c504bac76" gracePeriod=30 Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.217967 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-httpd" containerID="cri-o://41f8aa6b255e722aedc8a45922e1fe0efe3a3d94cfac7313eebbd27e33f44404" gracePeriod=30 Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.240703 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts" (OuterVolumeSpecName: "scripts") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.260746 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerStarted","Data":"0ec788f5a15dfe6e155c5581d3cec22f4464ee17c00d1d7c9db210cf86db8689"} Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.262548 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.262536094 podStartE2EDuration="8.262536094s" podCreationTimestamp="2025-11-29 07:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:30.218020949 +0000 UTC m=+1520.771550858" watchObservedRunningTime="2025-11-29 07:40:30.262536094 +0000 UTC m=+1520.816065993" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.270592 4660 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62c8714f-6773-4587-a147-350224a81832-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.270640 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62c8714f-6773-4587-a147-350224a81832-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.270652 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.276669 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx" (OuterVolumeSpecName: "kube-api-access-627dx") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "kube-api-access-627dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.282364 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f844c8dbc-j8g6j" podStartSLOduration=3.808911399 podStartE2EDuration="10.28234307s" podCreationTimestamp="2025-11-29 07:40:20 +0000 UTC" firstStartedPulling="2025-11-29 07:40:22.036468752 +0000 UTC m=+1512.589998651" lastFinishedPulling="2025-11-29 07:40:28.509900413 +0000 UTC m=+1519.063430322" observedRunningTime="2025-11-29 07:40:30.2783496 +0000 UTC m=+1520.831879499" watchObservedRunningTime="2025-11-29 07:40:30.28234307 +0000 UTC m=+1520.835872969" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.298913 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.312061 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.327238 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.634039371 podStartE2EDuration="10.327220105s" podCreationTimestamp="2025-11-29 07:40:20 +0000 UTC" firstStartedPulling="2025-11-29 07:40:21.905757345 +0000 UTC m=+1512.459287234" lastFinishedPulling="2025-11-29 07:40:25.598938069 +0000 UTC m=+1516.152467968" observedRunningTime="2025-11-29 07:40:30.322021151 +0000 UTC m=+1520.875551050" watchObservedRunningTime="2025-11-29 07:40:30.327220105 +0000 UTC m=+1520.880750004" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.372919 4660 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.372945 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-627dx\" (UniqueName: \"kubernetes.io/projected/62c8714f-6773-4587-a147-350224a81832-kube-api-access-627dx\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.372956 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.410816 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data" (OuterVolumeSpecName: "config-data") pod "62c8714f-6773-4587-a147-350224a81832" (UID: "62c8714f-6773-4587-a147-350224a81832"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.457572 4660 scope.go:117] "RemoveContainer" containerID="e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.478205 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c8714f-6773-4587-a147-350224a81832-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.522231 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.522210242 podStartE2EDuration="8.522210242s" podCreationTimestamp="2025-11-29 07:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:30.386990719 +0000 UTC m=+1520.940520618" watchObservedRunningTime="2025-11-29 07:40:30.522210242 +0000 UTC m=+1521.075740141" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.559947 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.571126 4660 scope.go:117] "RemoveContainer" containerID="78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288" Nov 29 07:40:30 crc kubenswrapper[4660]: E1129 07:40:30.573039 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288\": container with ID starting with 78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288 not found: ID does not exist" containerID="78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.573067 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288"} err="failed to get container status \"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288\": rpc error: code = NotFound desc = could not find container \"78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288\": container with ID starting with 78c9be76c687fc209d52a2344832b311a5ee2903fd22c56a78c81092df189288 not found: ID does not exist" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.573090 4660 scope.go:117] "RemoveContainer" containerID="e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f" Nov 29 07:40:30 crc kubenswrapper[4660]: E1129 07:40:30.589188 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f\": container with ID starting with e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f not found: ID does not exist" containerID="e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.589227 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f"} err="failed to get container status \"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f\": rpc error: code = NotFound desc = could not find container 
\"e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f\": container with ID starting with e444e666165f88eafa45a21b57486e5380b3766ed0688dfa0ac6dc1d6c81704f not found: ID does not exist" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.610001 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.656095 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.716082 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:30 crc kubenswrapper[4660]: E1129 07:40:30.716852 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.716869 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api" Nov 29 07:40:30 crc kubenswrapper[4660]: E1129 07:40:30.716887 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api-log" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.716894 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api-log" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.717110 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.717122 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c8714f-6773-4587-a147-350224a81832" containerName="cinder-api-log" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.718143 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.725630 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.725833 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.726038 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.733523 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810803 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810866 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810891 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810910 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810968 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-logs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.810985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4q62\" (UniqueName: \"kubernetes.io/projected/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-kube-api-access-j4q62\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.811012 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.811026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-scripts\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.811099 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916670 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-scripts\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916724 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916795 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916862 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916903 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916938 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.916964 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.917029 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-logs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.917051 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4q62\" 
(UniqueName: \"kubernetes.io/projected/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-kube-api-access-j4q62\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.918313 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.926130 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-logs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.933041 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.935219 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.936721 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.937162 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-scripts\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.938651 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.947948 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-config-data\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:30 crc kubenswrapper[4660]: I1129 07:40:30.954206 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4q62\" (UniqueName: \"kubernetes.io/projected/9b2bdc67-626d-4aa5-94ff-d413be98dc7c-kube-api-access-j4q62\") pod \"cinder-api-0\" (UID: \"9b2bdc67-626d-4aa5-94ff-d413be98dc7c\") " pod="openstack/cinder-api-0" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.010067 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vxrqr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.050729 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.120378 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt8rs\" (UniqueName: \"kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs\") pod \"a8e0c494-1877-49d7-8877-308fb75d13b1\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.120755 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data\") pod \"a8e0c494-1877-49d7-8877-308fb75d13b1\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.120852 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs\") pod \"a8e0c494-1877-49d7-8877-308fb75d13b1\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.120873 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle\") pod \"a8e0c494-1877-49d7-8877-308fb75d13b1\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.121288 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs" (OuterVolumeSpecName: "logs") pod "a8e0c494-1877-49d7-8877-308fb75d13b1" (UID: "a8e0c494-1877-49d7-8877-308fb75d13b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.120942 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts\") pod \"a8e0c494-1877-49d7-8877-308fb75d13b1\" (UID: \"a8e0c494-1877-49d7-8877-308fb75d13b1\") " Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.122130 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e0c494-1877-49d7-8877-308fb75d13b1-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.127215 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts" (OuterVolumeSpecName: "scripts") pod "a8e0c494-1877-49d7-8877-308fb75d13b1" (UID: "a8e0c494-1877-49d7-8877-308fb75d13b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.172899 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs" (OuterVolumeSpecName: "kube-api-access-gt8rs") pod "a8e0c494-1877-49d7-8877-308fb75d13b1" (UID: "a8e0c494-1877-49d7-8877-308fb75d13b1"). InnerVolumeSpecName "kube-api-access-gt8rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.179986 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data" (OuterVolumeSpecName: "config-data") pod "a8e0c494-1877-49d7-8877-308fb75d13b1" (UID: "a8e0c494-1877-49d7-8877-308fb75d13b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.181537 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8e0c494-1877-49d7-8877-308fb75d13b1" (UID: "a8e0c494-1877-49d7-8877-308fb75d13b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.234687 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.234713 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.234725 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e0c494-1877-49d7-8877-308fb75d13b1-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.234733 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt8rs\" (UniqueName: \"kubernetes.io/projected/a8e0c494-1877-49d7-8877-308fb75d13b1-kube-api-access-gt8rs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.280986 4660 generic.go:334] "Generic (PLEG): container finished" podID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerID="1cd84702aaf7f54338aa618c9459e93156eccb4415121b11d56c52cd5a9fd41a" exitCode=143 Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.281134 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerDied","Data":"1cd84702aaf7f54338aa618c9459e93156eccb4415121b11d56c52cd5a9fd41a"} Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.295296 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerStarted","Data":"a81e4bd323fce6ff50e1525a46136c874bfc1cd911556a36aef3b09069cc9bd2"} Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.297281 4660 generic.go:334] "Generic (PLEG): container finished" podID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerID="41184a8dc4c3b973d66261cddd7157511309dd91d0cb1cc49776567c504bac76" exitCode=143 Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.297330 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerDied","Data":"41184a8dc4c3b973d66261cddd7157511309dd91d0cb1cc49776567c504bac76"} Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.335761 4660 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-db-sync-vxrqr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.338762 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vxrqr" event={"ID":"a8e0c494-1877-49d7-8877-308fb75d13b1","Type":"ContainerDied","Data":"51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24"} Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.338804 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51679fd3feff33ae2d587fbfd90430a53fb6becb8bfa75064c747147ccadcc24" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.353683 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c4c5f6f9b-h8nfr"] Nov 29 07:40:31 crc kubenswrapper[4660]: E1129 07:40:31.354479 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" containerName="placement-db-sync" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.354548 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" containerName="placement-db-sync" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.354821 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" containerName="placement-db-sync" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.355899 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.369473 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.369726 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.369938 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.370530 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bb4k8" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.370899 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.377631 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c4c5f6f9b-h8nfr"] Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441584 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-internal-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441678 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-public-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441705 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mc52j\" (UniqueName: \"kubernetes.io/projected/c8432d67-8b8a-43f4-96b5-e852610f702c-kube-api-access-mc52j\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441746 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8432d67-8b8a-43f4-96b5-e852610f702c-logs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441788 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-config-data\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441827 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-combined-ca-bundle\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.441849 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-scripts\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543541 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-internal-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543591 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-public-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543629 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc52j\" (UniqueName: \"kubernetes.io/projected/c8432d67-8b8a-43f4-96b5-e852610f702c-kube-api-access-mc52j\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543660 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8432d67-8b8a-43f4-96b5-e852610f702c-logs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543696 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-config-data\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543726 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-combined-ca-bundle\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.543751 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-scripts\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.545542 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8432d67-8b8a-43f4-96b5-e852610f702c-logs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.552150 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-scripts\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.552797 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-public-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.553801 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-combined-ca-bundle\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.555623 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-config-data\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.557749 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8432d67-8b8a-43f4-96b5-e852610f702c-internal-tls-certs\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.562095 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc52j\" (UniqueName: \"kubernetes.io/projected/c8432d67-8b8a-43f4-96b5-e852610f702c-kube-api-access-mc52j\") pod \"placement-5c4c5f6f9b-h8nfr\" (UID: \"c8432d67-8b8a-43f4-96b5-e852610f702c\") " 
pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.681114 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.716725 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c8714f-6773-4587-a147-350224a81832" path="/var/lib/kubelet/pods/62c8714f-6773-4587-a147-350224a81832/volumes" Nov 29 07:40:31 crc kubenswrapper[4660]: I1129 07:40:31.734805 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.234476 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-795c6b768d-rnj8x"] Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.236549 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.252545 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.253167 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.293114 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-795c6b768d-rnj8x"] Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.368894 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369233 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmn9\" (UniqueName: \"kubernetes.io/projected/f92699d7-37a0-4093-81b8-ddb680ca5263-kube-api-access-6pmn9\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369269 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-internal-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369358 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-combined-ca-bundle\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369392 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data-custom\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 
07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369479 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92699d7-37a0-4093-81b8-ddb680ca5263-logs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.369503 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-public-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.437904 4660 generic.go:334] "Generic (PLEG): container finished" podID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerID="41f8aa6b255e722aedc8a45922e1fe0efe3a3d94cfac7313eebbd27e33f44404" exitCode=0 Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.438009 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerDied","Data":"41f8aa6b255e722aedc8a45922e1fe0efe3a3d94cfac7313eebbd27e33f44404"} Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470569 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-combined-ca-bundle\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470645 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data-custom\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470709 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92699d7-37a0-4093-81b8-ddb680ca5263-logs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470732 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-public-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470766 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470782 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmn9\" (UniqueName: 
\"kubernetes.io/projected/f92699d7-37a0-4093-81b8-ddb680ca5263-kube-api-access-6pmn9\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.470799 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-internal-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.472048 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92699d7-37a0-4093-81b8-ddb680ca5263-logs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.474819 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b2bdc67-626d-4aa5-94ff-d413be98dc7c","Type":"ContainerStarted","Data":"9d668746040dc7e78dcf94d846e87895ea6bc7186703884b05c4067f282643c7"} Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.487288 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.490145 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-config-data-custom\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.501108 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-public-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.503568 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-combined-ca-bundle\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.503933 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92699d7-37a0-4093-81b8-ddb680ca5263-internal-tls-certs\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.507890 4660 generic.go:334] "Generic (PLEG): container finished" podID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerID="33a03a2be56a887f58bd58c3155f83a1af441933a66f0a72385717d5a0353ee0" exitCode=0 Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.507980 4660 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerDied","Data":"33a03a2be56a887f58bd58c3155f83a1af441933a66f0a72385717d5a0353ee0"} Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.512192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmn9\" (UniqueName: \"kubernetes.io/projected/f92699d7-37a0-4093-81b8-ddb680ca5263-kube-api-access-6pmn9\") pod \"barbican-api-795c6b768d-rnj8x\" (UID: \"f92699d7-37a0-4093-81b8-ddb680ca5263\") " pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.527027 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerStarted","Data":"85382d56c7c82b7ef69a08936073d0abcfd7ae0666991e41a3168deafded5215"} Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.581125 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.670841 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.674496 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c4c5f6f9b-h8nfr"] Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.786124 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.786379 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="dnsmasq-dns" containerID="cri-o://bf4c43ba54149078240a3d3cdea24c70da0636d9a258862b30e383d9ad0aaca2" gracePeriod=10 Nov 29 07:40:32 crc kubenswrapper[4660]: I1129 07:40:32.971528 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.118569 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcvfx\" (UniqueName: \"kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.118793 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.118948 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.118995 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.119027 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.119071 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.119134 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs\") pod \"eb3f5ee7-fb58-4344-9550-c996316c256c\" (UID: \"eb3f5ee7-fb58-4344-9550-c996316c256c\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.120051 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.128335 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs" (OuterVolumeSpecName: "logs") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.134817 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts" (OuterVolumeSpecName: "scripts") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.134929 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx" (OuterVolumeSpecName: "kube-api-access-dcvfx") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "kube-api-access-dcvfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.135774 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.213865 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235350 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235384 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235397 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235412 4660 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235423 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb3f5ee7-fb58-4344-9550-c996316c256c-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.235434 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcvfx\" (UniqueName: \"kubernetes.io/projected/eb3f5ee7-fb58-4344-9550-c996316c256c-kube-api-access-dcvfx\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.245889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data" (OuterVolumeSpecName: "config-data") pod "eb3f5ee7-fb58-4344-9550-c996316c256c" (UID: "eb3f5ee7-fb58-4344-9550-c996316c256c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.337573 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3f5ee7-fb58-4344-9550-c996316c256c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.352193 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.447924 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.597826 4660 generic.go:334] "Generic (PLEG): container finished" podID="05daf086-18b7-460e-8a12-519d25e17862" containerID="bf4c43ba54149078240a3d3cdea24c70da0636d9a258862b30e383d9ad0aaca2" exitCode=0 Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.597910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerDied","Data":"bf4c43ba54149078240a3d3cdea24c70da0636d9a258862b30e383d9ad0aaca2"} Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.637844 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb3f5ee7-fb58-4344-9550-c996316c256c","Type":"ContainerDied","Data":"1c768356967ba564d8d4bbac4211096b70539c7efe4010335ad103dbcfb534e1"} Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.637895 4660 scope.go:117] "RemoveContainer" containerID="33a03a2be56a887f58bd58c3155f83a1af441933a66f0a72385717d5a0353ee0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.637955 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.655914 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.661214 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4c5f6f9b-h8nfr" event={"ID":"c8432d67-8b8a-43f4-96b5-e852610f702c","Type":"ContainerStarted","Data":"b6fdcc11efc2e9e3b89a8ac25967ffcc23c3450d95f39c2274f66bd29326f45c"} Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.694489 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a1e7728-2187-4765-9308-ed9c8f39dbb7","Type":"ContainerDied","Data":"77dc1d92a1f0ab7465b9a50ce03d61483aeefa1bc9183d89b913fca2dc7f64f0"} Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.694852 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.762212 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.776253 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.776517 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.776688 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.776799 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.776936 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.777017 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67xxs\" (UniqueName: \"kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs\") pod \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\" (UID: \"8a1e7728-2187-4765-9308-ed9c8f39dbb7\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.766089 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.778291 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs" (OuterVolumeSpecName: "logs") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.790438 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.797006 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs" (OuterVolumeSpecName: "kube-api-access-67xxs") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "kube-api-access-67xxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.809152 4660 scope.go:117] "RemoveContainer" containerID="1cd84702aaf7f54338aa618c9459e93156eccb4415121b11d56c52cd5a9fd41a" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.811291 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts" (OuterVolumeSpecName: "scripts") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.831676 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.850228 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868177 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:33 crc kubenswrapper[4660]: E1129 07:40:33.868548 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868562 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: E1129 07:40:33.868574 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868581 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: E1129 07:40:33.868594 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868600 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: E1129 07:40:33.868649 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868656 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" 
containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868811 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868822 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868835 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-log" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.868848 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" containerName="glance-httpd" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.869733 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.880652 4660 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.880681 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.880692 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a1e7728-2187-4765-9308-ed9c8f39dbb7-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.880733 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.880745 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67xxs\" (UniqueName: \"kubernetes.io/projected/8a1e7728-2187-4765-9308-ed9c8f39dbb7-kube-api-access-67xxs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.883303 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.885521 4660 scope.go:117] "RemoveContainer" containerID="41f8aa6b255e722aedc8a45922e1fe0efe3a3d94cfac7313eebbd27e33f44404" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.889668 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.899713 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.900011 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.900952 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.979825 4660 scope.go:117] "RemoveContainer" containerID="41184a8dc4c3b973d66261cddd7157511309dd91d0cb1cc49776567c504bac76" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981144 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp6qc\" (UniqueName: \"kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981252 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981319 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981340 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981425 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981496 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0\") pod \"05daf086-18b7-460e-8a12-519d25e17862\" (UID: \"05daf086-18b7-460e-8a12-519d25e17862\") " Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981701 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqk9q\" (UniqueName: \"kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981741 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981772 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") 
" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981806 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981850 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981902 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.981939 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.982040 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:33 crc kubenswrapper[4660]: I1129 07:40:33.992245 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc" (OuterVolumeSpecName: "kube-api-access-qp6qc") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "kube-api-access-qp6qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.020077 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data" (OuterVolumeSpecName: "config-data") pod "8a1e7728-2187-4765-9308-ed9c8f39dbb7" (UID: "8a1e7728-2187-4765-9308-ed9c8f39dbb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.025488 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.069988 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.083772 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.083871 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.085846 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.085882 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.086016 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.086108 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.086357 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqk9q\" (UniqueName: \"kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.086431 4660 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.087308 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp6qc\" (UniqueName: \"kubernetes.io/projected/05daf086-18b7-460e-8a12-519d25e17862-kube-api-access-qp6qc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.087355 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a1e7728-2187-4765-9308-ed9c8f39dbb7-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.087369 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.087383 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.087683 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.088741 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.088945 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.093793 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.105525 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.111225 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " 
pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.112329 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.122441 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.152000 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqk9q\" (UniqueName: \"kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.158192 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-795c6b768d-rnj8x"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.185587 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.191713 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.202127 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.252132 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.252421 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.268509 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config" (OuterVolumeSpecName: "config") pod "05daf086-18b7-460e-8a12-519d25e17862" (UID: "05daf086-18b7-460e-8a12-519d25e17862"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.293917 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.293944 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.293953 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05daf086-18b7-460e-8a12-519d25e17862-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.495352 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.525487 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.626920 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:34 crc kubenswrapper[4660]: E1129 07:40:34.627337 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="init" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.627353 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="init" Nov 29 07:40:34 crc kubenswrapper[4660]: E1129 07:40:34.627370 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="dnsmasq-dns" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.627375 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="dnsmasq-dns" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.627554 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="dnsmasq-dns" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.628451 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.631659 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.631821 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.646256 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.739005 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" event={"ID":"05daf086-18b7-460e-8a12-519d25e17862","Type":"ContainerDied","Data":"a6a123d3e2530c95c1fe4a77cb77254ac74165039f9010d05e6a02f5c4de5235"} Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.739055 4660 scope.go:117] "RemoveContainer" containerID="bf4c43ba54149078240a3d3cdea24c70da0636d9a258862b30e383d9ad0aaca2" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.739189 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.753813 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b2bdc67-626d-4aa5-94ff-d413be98dc7c","Type":"ContainerStarted","Data":"8a36f62dbed0cb2b1249eb04432ab16485e81d069ce358efc9dcdf86798e1714"} Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.776174 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-795c6b768d-rnj8x" event={"ID":"f92699d7-37a0-4093-81b8-ddb680ca5263","Type":"ContainerStarted","Data":"c9d5a0f0fe2cae7135a1f2a977ec0eb19462470fc829395cf9c036d86249abe3"} Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.806784 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.817589 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerStarted","Data":"43ed359f664e49c83a749a8618b55ca56f4ce5a924c4eb593e96f69a19a7a395"} Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825272 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825354 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825389 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 
29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825460 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825530 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825570 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825639 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.825793 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8xx\" (UniqueName: \"kubernetes.io/projected/0f0c79bc-487c-4206-8a0d-1b14d7081e28-kube-api-access-tp8xx\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.839387 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-s49ch"] Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.856849 4660 scope.go:117] "RemoveContainer" containerID="0b779c5bcfec541d06e4ed32cd95698f3724b818598348f9dbded4a17d0708b6" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.862535 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4c5f6f9b-h8nfr" event={"ID":"c8432d67-8b8a-43f4-96b5-e852610f702c","Type":"ContainerStarted","Data":"02413214f7e7e38a6895fba37a0dd272b98cbdfb2eb482758291a264d37f7b57"} Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.863632 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.863660 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.898703 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c4c5f6f9b-h8nfr" podStartSLOduration=3.898680313 podStartE2EDuration="3.898680313s" podCreationTimestamp="2025-11-29 07:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:34.885562041 +0000 UTC m=+1525.439091970" watchObservedRunningTime="2025-11-29 
07:40:34.898680313 +0000 UTC m=+1525.452210212" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927763 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927835 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927871 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927918 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927953 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.927986 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.928018 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.928089 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8xx\" (UniqueName: \"kubernetes.io/projected/0f0c79bc-487c-4206-8a0d-1b14d7081e28-kube-api-access-tp8xx\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.929661 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc 
kubenswrapper[4660]: I1129 07:40:34.933774 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.934330 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.936925 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.945334 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.959547 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.960316 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:34 crc kubenswrapper[4660]: I1129 07:40:34.965313 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8xx\" (UniqueName: \"kubernetes.io/projected/0f0c79bc-487c-4206-8a0d-1b14d7081e28-kube-api-access-tp8xx\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.018830 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " pod="openstack/glance-default-external-api-0" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.082553 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:40:35 crc kubenswrapper[4660]: W1129 07:40:35.168768 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6254ae_1626_4758_8200_2a9881a69ecf.slice/crio-304312cabe57d479cf29a401c1bd70a9eb945589e6de26af59ba26a3588d94d5 WatchSource:0}: Error finding container 
304312cabe57d479cf29a401c1bd70a9eb945589e6de26af59ba26a3588d94d5: Status 404 returned error can't find the container with id 304312cabe57d479cf29a401c1bd70a9eb945589e6de26af59ba26a3588d94d5 Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.274212 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.314139 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.314347 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.721321 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05daf086-18b7-460e-8a12-519d25e17862" path="/var/lib/kubelet/pods/05daf086-18b7-460e-8a12-519d25e17862/volumes" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.722237 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1e7728-2187-4765-9308-ed9c8f39dbb7" path="/var/lib/kubelet/pods/8a1e7728-2187-4765-9308-ed9c8f39dbb7/volumes" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.735652 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3f5ee7-fb58-4344-9550-c996316c256c" path="/var/lib/kubelet/pods/eb3f5ee7-fb58-4344-9550-c996316c256c/volumes" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.926051 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerStarted","Data":"304312cabe57d479cf29a401c1bd70a9eb945589e6de26af59ba26a3588d94d5"} Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.949140 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.949585 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b2bdc67-626d-4aa5-94ff-d413be98dc7c","Type":"ContainerStarted","Data":"e4ee0f2e7037dbbca21e2ce7ee8602a367c2cb2bcad0f261164764edbb72fb02"} Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.949746 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.974467 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-795c6b768d-rnj8x" event={"ID":"f92699d7-37a0-4093-81b8-ddb680ca5263","Type":"ContainerStarted","Data":"b62e822178e400acc367d1d1589d04a2732cedda2584bf3d3cfe5ffff5aa204c"} Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.974832 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-795c6b768d-rnj8x" event={"ID":"f92699d7-37a0-4093-81b8-ddb680ca5263","Type":"ContainerStarted","Data":"4cde0657e48b0b25f8fb5b5cb5061a6ed49471b6e6d62d5fffcc2ae1add3811c"} Nov 29 07:40:35 crc kubenswrapper[4660]: I1129 07:40:35.974855 
4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.041318 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4c5f6f9b-h8nfr" event={"ID":"c8432d67-8b8a-43f4-96b5-e852610f702c","Type":"ContainerStarted","Data":"dd4f894aa51253b7fb5c2cc72b0e57b327d9b4d9f21efe046e3820a99d0f29e3"} Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.044225 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.046663 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.046638701 podStartE2EDuration="6.046638701s" podCreationTimestamp="2025-11-29 07:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:36.008542202 +0000 UTC m=+1526.562072101" watchObservedRunningTime="2025-11-29 07:40:36.046638701 +0000 UTC m=+1526.600168600" Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.072505 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-795c6b768d-rnj8x" podStartSLOduration=4.072479952 podStartE2EDuration="4.072479952s" podCreationTimestamp="2025-11-29 07:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:36.037707154 +0000 UTC m=+1526.591237053" watchObservedRunningTime="2025-11-29 07:40:36.072479952 +0000 UTC m=+1526.626009851" Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.131104 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.327859 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:36 crc kubenswrapper[4660]: I1129 07:40:36.328472 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.100875 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerStarted","Data":"92e6dad99bc221f8e52d0994f39256b47c1087690520886155551ba69049e740"} Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.124320 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerStarted","Data":"86379d4acf5799331a0742aa9893b6c92f9940bc1ffad16caaba0bff385c5752"} Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.124462 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.161810 4660 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="cinder-scheduler" containerID="cri-o://650dd2807d241ca0e5ec3a42fe7b7a169cbdb6ac7bbf9a866a6066822f280d34" gracePeriod=30 Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.162196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerStarted","Data":"dca502b04916f04e594f536bff610b6a7d1f97ae40a6bcdf7a37639bea94fc9a"} Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.164079 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="probe" containerID="cri-o://0ec788f5a15dfe6e155c5581d3cec22f4464ee17c00d1d7c9db210cf86db8689" gracePeriod=30 Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.164148 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.171185 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 29 07:40:37 crc kubenswrapper[4660]: I1129 07:40:37.185580 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.673417923 podStartE2EDuration="9.185567069s" podCreationTimestamp="2025-11-29 07:40:28 +0000 UTC" firstStartedPulling="2025-11-29 07:40:29.301604094 +0000 UTC m=+1519.855133993" lastFinishedPulling="2025-11-29 07:40:35.81375324 +0000 UTC m=+1526.367283139" observedRunningTime="2025-11-29 07:40:37.181534138 +0000 UTC m=+1527.735064037" watchObservedRunningTime="2025-11-29 07:40:37.185567069 +0000 UTC m=+1527.739096958" Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.188902 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerStarted","Data":"c701f400fbe75878b50eea8da66c082f9b38f6e8c8cb63aa86260636bdd29941"} Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.189242 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerStarted","Data":"c8ab630cb1a0fd6ab4d421d9ebe5315e623fe43f3b2b40d3b6438aaec9398975"} Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.193448 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerStarted","Data":"734e9488efd5033c71e4bce980b74b33eea28b63b6cbddaba2d3651f320bc92d"} Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.216815 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.216796913 podStartE2EDuration="4.216796913s" podCreationTimestamp="2025-11-29 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:38.209796281 +0000 UTC m=+1528.763326180" watchObservedRunningTime="2025-11-29 
07:40:38.216796913 +0000 UTC m=+1528.770326812" Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.242522 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.242501931 podStartE2EDuration="5.242501931s" podCreationTimestamp="2025-11-29 07:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:38.235125749 +0000 UTC m=+1528.788655668" watchObservedRunningTime="2025-11-29 07:40:38.242501931 +0000 UTC m=+1528.796031830" Nov 29 07:40:38 crc kubenswrapper[4660]: I1129 07:40:38.656322 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-s49ch" podUID="05daf086-18b7-460e-8a12-519d25e17862" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.226466 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerID="0ec788f5a15dfe6e155c5581d3cec22f4464ee17c00d1d7c9db210cf86db8689" exitCode=0 Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.226733 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerID="650dd2807d241ca0e5ec3a42fe7b7a169cbdb6ac7bbf9a866a6066822f280d34" exitCode=0 Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.227717 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerDied","Data":"0ec788f5a15dfe6e155c5581d3cec22f4464ee17c00d1d7c9db210cf86db8689"} Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.227743 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerDied","Data":"650dd2807d241ca0e5ec3a42fe7b7a169cbdb6ac7bbf9a866a6066822f280d34"} Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.325207 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458060 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458245 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458330 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458279 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458385 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458862 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7bh4\" (UniqueName: \"kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.458993 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts\") pod \"6a829b0a-ecfa-4804-9614-7db77030e07c\" (UID: \"6a829b0a-ecfa-4804-9614-7db77030e07c\") " Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.459379 4660 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a829b0a-ecfa-4804-9614-7db77030e07c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.471845 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.481365 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts" (OuterVolumeSpecName: "scripts") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.487779 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4" (OuterVolumeSpecName: "kube-api-access-g7bh4") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "kube-api-access-g7bh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.536796 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.561222 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.561249 4660 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.561258 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.561267 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7bh4\" (UniqueName: \"kubernetes.io/projected/6a829b0a-ecfa-4804-9614-7db77030e07c-kube-api-access-g7bh4\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.651450 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data" (OuterVolumeSpecName: "config-data") pod "6a829b0a-ecfa-4804-9614-7db77030e07c" (UID: "6a829b0a-ecfa-4804-9614-7db77030e07c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.663358 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a829b0a-ecfa-4804-9614-7db77030e07c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:39 crc kubenswrapper[4660]: I1129 07:40:39.949075 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.240879 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a829b0a-ecfa-4804-9614-7db77030e07c","Type":"ContainerDied","Data":"12900f65fd5d4143dbeb9ec301f5bdcbc462ae86ef253a0402711e625f0c30c0"} Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.240930 4660 scope.go:117] "RemoveContainer" containerID="0ec788f5a15dfe6e155c5581d3cec22f4464ee17c00d1d7c9db210cf86db8689" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.241078 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.264771 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.264983 4660 scope.go:117] "RemoveContainer" containerID="650dd2807d241ca0e5ec3a42fe7b7a169cbdb6ac7bbf9a866a6066822f280d34" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.275921 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.323094 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:40 crc kubenswrapper[4660]: E1129 07:40:40.323819 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="cinder-scheduler" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.323832 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="cinder-scheduler" Nov 29 07:40:40 crc kubenswrapper[4660]: E1129 07:40:40.323846 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="probe" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.323853 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="probe" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.324180 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="probe" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.324214 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" containerName="cinder-scheduler" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.325557 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.329450 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.350746 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.369689 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376202 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g2rl\" (UniqueName: \"kubernetes.io/projected/b255ded3-2849-4f46-bb45-5c2485862b55-kube-api-access-4g2rl\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376290 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376341 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376441 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376508 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.376656 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b255ded3-2849-4f46-bb45-5c2485862b55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.412002 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54fd458c48-wjcjs" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.488542 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b255ded3-2849-4f46-bb45-5c2485862b55-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.489097 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g2rl\" (UniqueName: \"kubernetes.io/projected/b255ded3-2849-4f46-bb45-5c2485862b55-kube-api-access-4g2rl\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.489142 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.489172 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.489198 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.489234 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.490207 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b255ded3-2849-4f46-bb45-5c2485862b55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.500727 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.504809 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.507815 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-scripts\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.510408 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b255ded3-2849-4f46-bb45-5c2485862b55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.513192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g2rl\" (UniqueName: \"kubernetes.io/projected/b255ded3-2849-4f46-bb45-5c2485862b55-kube-api-access-4g2rl\") pod \"cinder-scheduler-0\" (UID: \"b255ded3-2849-4f46-bb45-5c2485862b55\") " pod="openstack/cinder-scheduler-0" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.546188 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:40 crc kubenswrapper[4660]: I1129 07:40:40.670588 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 29 07:40:41 crc kubenswrapper[4660]: I1129 07:40:41.197827 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 29 07:40:41 crc kubenswrapper[4660]: W1129 07:40:41.211582 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb255ded3_2849_4f46_bb45_5c2485862b55.slice/crio-3ff242d41665d96c134e32720e30ae1fd0d4195b72dc3db7dab7afa410a5c46a WatchSource:0}: Error finding container 3ff242d41665d96c134e32720e30ae1fd0d4195b72dc3db7dab7afa410a5c46a: Status 404 returned error can't find the container with id 3ff242d41665d96c134e32720e30ae1fd0d4195b72dc3db7dab7afa410a5c46a Nov 29 07:40:41 crc kubenswrapper[4660]: I1129 07:40:41.261882 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b255ded3-2849-4f46-bb45-5c2485862b55","Type":"ContainerStarted","Data":"3ff242d41665d96c134e32720e30ae1fd0d4195b72dc3db7dab7afa410a5c46a"} Nov 29 07:40:41 crc kubenswrapper[4660]: I1129 07:40:41.600601 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:40:41 crc kubenswrapper[4660]: I1129 07:40:41.703465 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a829b0a-ecfa-4804-9614-7db77030e07c" path="/var/lib/kubelet/pods/6a829b0a-ecfa-4804-9614-7db77030e07c/volumes" Nov 29 07:40:42 crc kubenswrapper[4660]: I1129 07:40:42.385706 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b255ded3-2849-4f46-bb45-5c2485862b55","Type":"ContainerStarted","Data":"a49ac9289b5ddfaed36b19f23bc3ccdfb2681723cbe44a7d130bbe848571c980"} Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.414104 4660 generic.go:334] "Generic (PLEG): container finished" podID="50a095fb-8968-4986-b063-8652e7e2cd0b" containerID="799ebf3bd6cc4f594dd6ed33f3794df92352c8f1e93d05ef46ab674c6481c3b6" exitCode=0 Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.414171 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g6zvg" event={"ID":"50a095fb-8968-4986-b063-8652e7e2cd0b","Type":"ContainerDied","Data":"799ebf3bd6cc4f594dd6ed33f3794df92352c8f1e93d05ef46ab674c6481c3b6"} Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.416438 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b255ded3-2849-4f46-bb45-5c2485862b55","Type":"ContainerStarted","Data":"f863c4a026ef77b19164ab384f49e4d3ced0786207bc96911d4cd94b30d86ef7"} Nov 29 07:40:43 crc 
kubenswrapper[4660]: I1129 07:40:43.464508 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.464488765 podStartE2EDuration="3.464488765s" podCreationTimestamp="2025-11-29 07:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:43.461723679 +0000 UTC m=+1534.015253578" watchObservedRunningTime="2025-11-29 07:40:43.464488765 +0000 UTC m=+1534.018018664" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.908344 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.909357 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.913582 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.913594 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.913783 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-thj6k" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.965104 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.976937 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.977199 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.977404 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmtgv\" (UniqueName: \"kubernetes.io/projected/d541b23c-6413-4bee-834c-96e5d46a9155-kube-api-access-mmtgv\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:43 crc kubenswrapper[4660]: I1129 07:40:43.977487 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config-secret\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.079015 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmtgv\" (UniqueName: \"kubernetes.io/projected/d541b23c-6413-4bee-834c-96e5d46a9155-kube-api-access-mmtgv\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.079095 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config-secret\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.079137 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.079204 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.080293 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.094954 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-openstack-config-secret\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.095850 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d541b23c-6413-4bee-834c-96e5d46a9155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.100858 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5d8477fd94-v56g5" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.105871 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmtgv\" (UniqueName: \"kubernetes.io/projected/d541b23c-6413-4bee-834c-96e5d46a9155-kube-api-access-mmtgv\") pod \"openstackclient\" (UID: \"d541b23c-6413-4bee-834c-96e5d46a9155\") " pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.186810 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.187291 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon-log" containerID="cri-o://a06fa5bc5dea81f87eb50d48ff9fc0f67ec231eb279da20f46810fc9e7f222f0" gracePeriod=30 Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.193130 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76565fb74d-wgqb4" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" containerID="cri-o://156c64ed6d999d268ab91dc231927009499a9200a4ba906f1bf8c8a8b4315a1f" gracePeriod=30 Nov 29 
07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.204209 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.204249 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.247010 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.264376 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.274329 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.430913 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: I1129 07:40:44.430953 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:44 crc kubenswrapper[4660]: W1129 07:40:44.998567 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd541b23c_6413_4bee_834c_96e5d46a9155.slice/crio-25c451cda3f6224073dd3b7beb7c6dca1ef1621ed911774cc812240f55215ca1 WatchSource:0}: Error finding container 25c451cda3f6224073dd3b7beb7c6dca1ef1621ed911774cc812240f55215ca1: Status 404 returned error can't find the container with id 25c451cda3f6224073dd3b7beb7c6dca1ef1621ed911774cc812240f55215ca1 Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.002252 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.002684 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.061413 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="9b2bdc67-626d-4aa5-94ff-d413be98dc7c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.159:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.117129 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-798sh\" (UniqueName: \"kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh\") pod \"50a095fb-8968-4986-b063-8652e7e2cd0b\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.117266 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle\") pod \"50a095fb-8968-4986-b063-8652e7e2cd0b\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.117345 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config\") pod \"50a095fb-8968-4986-b063-8652e7e2cd0b\" (UID: \"50a095fb-8968-4986-b063-8652e7e2cd0b\") " Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.127796 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh" (OuterVolumeSpecName: "kube-api-access-798sh") pod "50a095fb-8968-4986-b063-8652e7e2cd0b" (UID: "50a095fb-8968-4986-b063-8652e7e2cd0b"). InnerVolumeSpecName "kube-api-access-798sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.165126 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50a095fb-8968-4986-b063-8652e7e2cd0b" (UID: "50a095fb-8968-4986-b063-8652e7e2cd0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.206873 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config" (OuterVolumeSpecName: "config") pod "50a095fb-8968-4986-b063-8652e7e2cd0b" (UID: "50a095fb-8968-4986-b063-8652e7e2cd0b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.219937 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-798sh\" (UniqueName: \"kubernetes.io/projected/50a095fb-8968-4986-b063-8652e7e2cd0b-kube-api-access-798sh\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.220033 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.220092 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/50a095fb-8968-4986-b063-8652e7e2cd0b-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.276351 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.276408 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.324784 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.324841 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.438834 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g6zvg" event={"ID":"50a095fb-8968-4986-b063-8652e7e2cd0b","Type":"ContainerDied","Data":"dc63e6a5dbf9db7cde6a6c10feb6b0f7e7fdf53e0a0b047e6972f778a44392b2"} Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.439121 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc63e6a5dbf9db7cde6a6c10feb6b0f7e7fdf53e0a0b047e6972f778a44392b2" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.438871 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g6zvg" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.440760 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d541b23c-6413-4bee-834c-96e5d46a9155","Type":"ContainerStarted","Data":"25c451cda3f6224073dd3b7beb7c6dca1ef1621ed911774cc812240f55215ca1"} Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.441809 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.441851 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.671873 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.714809 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:40:45 crc kubenswrapper[4660]: E1129 07:40:45.715214 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a095fb-8968-4986-b063-8652e7e2cd0b" containerName="neutron-db-sync" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.715230 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a095fb-8968-4986-b063-8652e7e2cd0b" containerName="neutron-db-sync" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.715437 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a095fb-8968-4986-b063-8652e7e2cd0b" containerName="neutron-db-sync" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.718479 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.749988 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.845331 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.845764 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.856753 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.856842 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n247f\" (UniqueName: \"kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: 
\"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.863973 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.864012 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.864756 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.877518 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.884109 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.884377 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fwv82" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.889172 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.889760 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.912617 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967555 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967611 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n247f\" (UniqueName: \"kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967670 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967697 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs\") pod \"neutron-7c46598bd8-gq9r5\" (UID: 
\"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967744 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967761 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967790 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967809 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56pj\" (UniqueName: \"kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967840 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967876 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.967895 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.969096 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.969311 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " 
pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.969671 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.969872 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:45 crc kubenswrapper[4660]: I1129 07:40:45.970429 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.020168 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n247f\" (UniqueName: \"kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f\") pod \"dnsmasq-dns-6578955fd5-xj7bl\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.057993 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9b2bdc67-626d-4aa5-94ff-d413be98dc7c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.159:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.058743 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.069761 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.069863 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56pj\" (UniqueName: \"kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.069887 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.069924 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.070005 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.083418 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.084585 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.085897 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.090315 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.104818 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v56pj\" (UniqueName: \"kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj\") pod \"neutron-7c46598bd8-gq9r5\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.244044 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.455816 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.455837 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.599940 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.600392 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:46 crc kubenswrapper[4660]: W1129 07:40:46.703174 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5cded14_8a67_4297_b354_a7ed6aa91e74.slice/crio-c2c9bd784dccce5e514e62018af950ab21c6c6904df77f72367096842d0c0d37 WatchSource:0}: Error finding container c2c9bd784dccce5e514e62018af950ab21c6c6904df77f72367096842d0c0d37: Status 404 returned error can't find the container with id c2c9bd784dccce5e514e62018af950ab21c6c6904df77f72367096842d0c0d37 Nov 29 07:40:46 crc kubenswrapper[4660]: I1129 07:40:46.711594 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.023737 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.501850 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerStarted","Data":"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec"} Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.502105 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerStarted","Data":"1703acf144b31ddf3d8172ac442c5708aab0a9be20d063b29b003262d1e63555"} Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.505717 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerStarted","Data":"3fde0db0485b916482161c15d71856e3dbbc5907376cf19e50f16fa8b1c20dc3"} Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.505749 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" 
event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerStarted","Data":"c2c9bd784dccce5e514e62018af950ab21c6c6904df77f72367096842d0c0d37"} Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.600872 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:47 crc kubenswrapper[4660]: I1129 07:40:47.601492 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:48 crc kubenswrapper[4660]: I1129 07:40:48.520549 4660 generic.go:334] "Generic (PLEG): container finished" podID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerID="3fde0db0485b916482161c15d71856e3dbbc5907376cf19e50f16fa8b1c20dc3" exitCode=0 Nov 29 07:40:48 crc kubenswrapper[4660]: I1129 07:40:48.520606 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerDied","Data":"3fde0db0485b916482161c15d71856e3dbbc5907376cf19e50f16fa8b1c20dc3"} Nov 29 07:40:48 crc kubenswrapper[4660]: I1129 07:40:48.530287 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerStarted","Data":"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d"} Nov 29 07:40:48 crc kubenswrapper[4660]: I1129 07:40:48.530780 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.090862 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c46598bd8-gq9r5" podStartSLOduration=4.090844061 podStartE2EDuration="4.090844061s" podCreationTimestamp="2025-11-29 07:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:48.588395721 +0000 UTC m=+1539.141925620" watchObservedRunningTime="2025-11-29 07:40:49.090844061 +0000 UTC m=+1539.644373960" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.097670 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d5bfc6bd5-zc4q8"] Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.099093 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.107278 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.107416 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.124780 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d5bfc6bd5-zc4q8"] Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273329 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-public-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273633 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-internal-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273691 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpldc\" (UniqueName: \"kubernetes.io/projected/b601f952-5ec7-401c-b639-01245efb2379-kube-api-access-zpldc\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273767 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-ovndb-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273843 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273888 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-combined-ca-bundle\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.273926 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-httpd-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394535 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394631 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-combined-ca-bundle\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394676 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-httpd-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394802 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-public-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394841 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-internal-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394892 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpldc\" (UniqueName: \"kubernetes.io/projected/b601f952-5ec7-401c-b639-01245efb2379-kube-api-access-zpldc\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.394945 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-ovndb-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.414123 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-internal-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.414710 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-ovndb-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.429903 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-httpd-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: 
\"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.431970 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-public-tls-certs\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.460401 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpldc\" (UniqueName: \"kubernetes.io/projected/b601f952-5ec7-401c-b639-01245efb2379-kube-api-access-zpldc\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.474749 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-combined-ca-bundle\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.480824 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b601f952-5ec7-401c-b639-01245efb2379-config\") pod \"neutron-7d5bfc6bd5-zc4q8\" (UID: \"b601f952-5ec7-401c-b639-01245efb2379\") " pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.561947 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerStarted","Data":"6312fdc8b886c1b30c0d7fac3db759b94b2025028543912fd69aaea68375a7a2"} Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.562855 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.608393 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" podStartSLOduration=4.608375665 podStartE2EDuration="4.608375665s" podCreationTimestamp="2025-11-29 07:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:49.604034336 +0000 UTC m=+1540.157564255" watchObservedRunningTime="2025-11-29 07:40:49.608375665 +0000 UTC m=+1540.161905564" Nov 29 07:40:49 crc kubenswrapper[4660]: I1129 07:40:49.775989 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:50 crc kubenswrapper[4660]: I1129 07:40:50.103044 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="9b2bdc67-626d-4aa5-94ff-d413be98dc7c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.159:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:50 crc kubenswrapper[4660]: I1129 07:40:50.504969 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d5bfc6bd5-zc4q8"] Nov 29 07:40:50 crc kubenswrapper[4660]: I1129 07:40:50.585490 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d5bfc6bd5-zc4q8" event={"ID":"b601f952-5ec7-401c-b639-01245efb2379","Type":"ContainerStarted","Data":"b18a7ae42ef786a04855a68b9212a9938ac094449e0eee93a0e33543ff38dba7"} Nov 29 07:40:51 crc kubenswrapper[4660]: I1129 07:40:51.105772 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9b2bdc67-626d-4aa5-94ff-d413be98dc7c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.159:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:51 crc kubenswrapper[4660]: I1129 07:40:51.122440 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 29 07:40:51 crc kubenswrapper[4660]: I1129 07:40:51.602454 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d5bfc6bd5-zc4q8" event={"ID":"b601f952-5ec7-401c-b639-01245efb2379","Type":"ContainerStarted","Data":"1e8f3520daf74043cfde068fc306d26f2efd35fa3e9f45e804814355e7663687"} Nov 29 07:40:51 crc kubenswrapper[4660]: I1129 07:40:51.609867 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:51 crc kubenswrapper[4660]: I1129 07:40:51.610282 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.069421 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.331786 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.332378 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5d8477fd94-v56g5" podUID="953f9580-5907-45bf-ae44-e48149acc44c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 
29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.611018 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d5bfc6bd5-zc4q8" event={"ID":"b601f952-5ec7-401c-b639-01245efb2379","Type":"ContainerStarted","Data":"a773f73c0c1a982793a4265a809eaf218824d7c414348916836de9d1024c71dc"} Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.612675 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.615841 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.615848 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-795c6b768d-rnj8x" podUID="f92699d7-37a0-4093-81b8-ddb680ca5263" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.638206 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d5bfc6bd5-zc4q8" podStartSLOduration=3.638188581 podStartE2EDuration="3.638188581s" podCreationTimestamp="2025-11-29 07:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:40:52.636949587 +0000 UTC m=+1543.190479496" watchObservedRunningTime="2025-11-29 07:40:52.638188581 +0000 UTC m=+1543.191718480" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.679023 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.684377 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-795c6b768d-rnj8x" Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.772675 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"] Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.777287 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" containerID="cri-o://63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5" gracePeriod=30 Nov 29 07:40:52 crc kubenswrapper[4660]: I1129 07:40:52.777736 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" containerID="cri-o://46c07f3efd0ec9a7da11e54c925e9e525e9f543d7dbb4dc48f67851681658bc3" gracePeriod=30 Nov 29 07:40:53 crc kubenswrapper[4660]: E1129 07:40:53.224002 4660 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83bbeb12_7456_4c22_8d8d_06e569201498.slice/crio-conmon-63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83bbeb12_7456_4c22_8d8d_06e569201498.slice/crio-63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5.scope\": RecentStats: unable to find data in memory cache]" Nov 29 07:40:53 crc kubenswrapper[4660]: I1129 07:40:53.633358 4660 generic.go:334] "Generic (PLEG): container finished" podID="83bbeb12-7456-4c22-8d8d-06e569201498" containerID="63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5" exitCode=143 Nov 29 07:40:53 crc kubenswrapper[4660]: I1129 07:40:53.634669 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerDied","Data":"63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5"} Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.118704 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.119073 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.120461 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.122780 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.122902 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:40:55 crc kubenswrapper[4660]: I1129 07:40:55.248513 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.060744 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.215699 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.216567 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="dnsmasq-dns" containerID="cri-o://1dc3033bf1d2b802bc727bd7216a1fcaa5c37a0197d8f9a0293c9823dbe86f60" gracePeriod=10 Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.437449 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:59942->10.217.0.152:9311: read: connection reset by peer" Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.437700 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c946b766-6n2bk" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:59940->10.217.0.152:9311: read: connection reset by peer" Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.669683 4660 generic.go:334] "Generic (PLEG): container finished" podID="83bbeb12-7456-4c22-8d8d-06e569201498" 
containerID="46c07f3efd0ec9a7da11e54c925e9e525e9f543d7dbb4dc48f67851681658bc3" exitCode=0 Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.669746 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerDied","Data":"46c07f3efd0ec9a7da11e54c925e9e525e9f543d7dbb4dc48f67851681658bc3"} Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.671483 4660 generic.go:334] "Generic (PLEG): container finished" podID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerID="1dc3033bf1d2b802bc727bd7216a1fcaa5c37a0197d8f9a0293c9823dbe86f60" exitCode=0 Nov 29 07:40:56 crc kubenswrapper[4660]: I1129 07:40:56.671501 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerDied","Data":"1dc3033bf1d2b802bc727bd7216a1fcaa5c37a0197d8f9a0293c9823dbe86f60"} Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.096192 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.113564 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188431 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle\") pod \"83bbeb12-7456-4c22-8d8d-06e569201498\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvxdv\" (UniqueName: \"kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv\") pod \"83bbeb12-7456-4c22-8d8d-06e569201498\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188568 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188701 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188766 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom\") pod \"83bbeb12-7456-4c22-8d8d-06e569201498\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188809 
4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs\") pod \"83bbeb12-7456-4c22-8d8d-06e569201498\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188855 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188919 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.188984 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data\") pod \"83bbeb12-7456-4c22-8d8d-06e569201498\" (UID: \"83bbeb12-7456-4c22-8d8d-06e569201498\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.189050 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gfv6\" (UniqueName: \"kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6\") pod \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\" (UID: \"e039299e-a02d-4f10-aa3a-d755d77cc9ac\") " Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.224215 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs" (OuterVolumeSpecName: "logs") pod "83bbeb12-7456-4c22-8d8d-06e569201498" (UID: "83bbeb12-7456-4c22-8d8d-06e569201498"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.231503 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv" (OuterVolumeSpecName: "kube-api-access-zvxdv") pod "83bbeb12-7456-4c22-8d8d-06e569201498" (UID: "83bbeb12-7456-4c22-8d8d-06e569201498"). InnerVolumeSpecName "kube-api-access-zvxdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.233659 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83bbeb12-7456-4c22-8d8d-06e569201498" (UID: "83bbeb12-7456-4c22-8d8d-06e569201498"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.234241 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6" (OuterVolumeSpecName: "kube-api-access-8gfv6") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "kube-api-access-8gfv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.299921 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gfv6\" (UniqueName: \"kubernetes.io/projected/e039299e-a02d-4f10-aa3a-d755d77cc9ac-kube-api-access-8gfv6\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.299954 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvxdv\" (UniqueName: \"kubernetes.io/projected/83bbeb12-7456-4c22-8d8d-06e569201498-kube-api-access-zvxdv\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.299963 4660 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.299970 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83bbeb12-7456-4c22-8d8d-06e569201498-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.323371 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.331026 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.358020 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config" (OuterVolumeSpecName: "config") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.359229 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83bbeb12-7456-4c22-8d8d-06e569201498" (UID: "83bbeb12-7456-4c22-8d8d-06e569201498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.380533 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data" (OuterVolumeSpecName: "config-data") pod "83bbeb12-7456-4c22-8d8d-06e569201498" (UID: "83bbeb12-7456-4c22-8d8d-06e569201498"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.390113 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401531 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401565 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401574 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bbeb12-7456-4c22-8d8d-06e569201498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401583 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401592 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.401603 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.412715 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e039299e-a02d-4f10-aa3a-d755d77cc9ac" (UID: "e039299e-a02d-4f10-aa3a-d755d77cc9ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.502674 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e039299e-a02d-4f10-aa3a-d755d77cc9ac-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.680988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c946b766-6n2bk" event={"ID":"83bbeb12-7456-4c22-8d8d-06e569201498","Type":"ContainerDied","Data":"a6bd390f64572a61401cdd839775000daac71280b7507dff45e0df1144559afd"} Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.681035 4660 scope.go:117] "RemoveContainer" containerID="46c07f3efd0ec9a7da11e54c925e9e525e9f543d7dbb4dc48f67851681658bc3" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.681054 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9c946b766-6n2bk" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.691601 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" event={"ID":"e039299e-a02d-4f10-aa3a-d755d77cc9ac","Type":"ContainerDied","Data":"52dbf3b62b81f099c942dad074afca350382a273f78c419bf1baf0e2929e1520"} Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.691681 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.737522 4660 scope.go:117] "RemoveContainer" containerID="63271c9e08c57b7c765be46e23c65aa9aa329af5e01f1c8a1646076995cd95b5" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.741100 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"] Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.767793 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9c946b766-6n2bk"] Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.790914 4660 scope.go:117] "RemoveContainer" containerID="1dc3033bf1d2b802bc727bd7216a1fcaa5c37a0197d8f9a0293c9823dbe86f60" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.834031 4660 scope.go:117] "RemoveContainer" containerID="8e8933b2fec35223ce9ddbf8531693404ddbd1959810a7b5e6aa15240eb192e3" Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.855544 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:57 crc kubenswrapper[4660]: I1129 07:40:57.874477 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-f6wcq"] Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.418661 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-75ddc44955-xj8mn"] Nov 29 07:40:58 crc kubenswrapper[4660]: E1129 07:40:58.419016 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="dnsmasq-dns" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419028 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="dnsmasq-dns" Nov 29 07:40:58 crc kubenswrapper[4660]: E1129 07:40:58.419045 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419051 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" Nov 29 07:40:58 crc kubenswrapper[4660]: E1129 07:40:58.419068 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="init" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419074 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="init" Nov 29 07:40:58 crc kubenswrapper[4660]: E1129 07:40:58.419092 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419098 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419250 4660 
memory_manager.go:354] "RemoveStaleState removing state" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419261 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" containerName="dnsmasq-dns" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.419276 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" containerName="barbican-api-log" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.420215 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.422664 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.422906 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.423051 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.477189 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75ddc44955-xj8mn"] Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535223 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-public-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535265 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-etc-swift\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535293 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzlk\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-kube-api-access-ctzlk\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535317 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-run-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535362 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-internal-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535387 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-config-data\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535429 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-combined-ca-bundle\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.535468 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-log-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.591420 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.636807 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-combined-ca-bundle\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.636914 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-log-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.636994 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-public-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.637018 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-etc-swift\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.637063 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzlk\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-kube-api-access-ctzlk\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.637104 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-run-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: 
\"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.637175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-internal-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.637203 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-config-data\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.647679 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-run-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.647768 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a79873-e3bd-4172-b5c3-17a981a9a091-log-httpd\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.654029 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-combined-ca-bundle\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.671013 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-internal-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.671103 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-config-data\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.671724 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a79873-e3bd-4172-b5c3-17a981a9a091-public-tls-certs\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.674033 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-etc-swift\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 
07:40:58.674808 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzlk\" (UniqueName: \"kubernetes.io/projected/27a79873-e3bd-4172-b5c3-17a981a9a091-kube-api-access-ctzlk\") pod \"swift-proxy-75ddc44955-xj8mn\" (UID: \"27a79873-e3bd-4172-b5c3-17a981a9a091\") " pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:58 crc kubenswrapper[4660]: I1129 07:40:58.741054 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:40:59 crc kubenswrapper[4660]: I1129 07:40:59.705005 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bbeb12-7456-4c22-8d8d-06e569201498" path="/var/lib/kubelet/pods/83bbeb12-7456-4c22-8d8d-06e569201498/volumes" Nov 29 07:40:59 crc kubenswrapper[4660]: I1129 07:40:59.705621 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e039299e-a02d-4f10-aa3a-d755d77cc9ac" path="/var/lib/kubelet/pods/e039299e-a02d-4f10-aa3a-d755d77cc9ac/volumes" Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.279268 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.279818 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-central-agent" containerID="cri-o://a81e4bd323fce6ff50e1525a46136c874bfc1cd911556a36aef3b09069cc9bd2" gracePeriod=30 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.279847 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="proxy-httpd" containerID="cri-o://86379d4acf5799331a0742aa9893b6c92f9940bc1ffad16caaba0bff385c5752" gracePeriod=30 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.279919 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-notification-agent" containerID="cri-o://85382d56c7c82b7ef69a08936073d0abcfd7ae0666991e41a3168deafded5215" gracePeriod=30 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.279930 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="sg-core" containerID="cri-o://43ed359f664e49c83a749a8618b55ca56f4ce5a924c4eb593e96f69a19a7a395" gracePeriod=30 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748430 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerID="86379d4acf5799331a0742aa9893b6c92f9940bc1ffad16caaba0bff385c5752" exitCode=0 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748464 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerID="43ed359f664e49c83a749a8618b55ca56f4ce5a924c4eb593e96f69a19a7a395" exitCode=2 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748474 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerID="a81e4bd323fce6ff50e1525a46136c874bfc1cd911556a36aef3b09069cc9bd2" exitCode=0 Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748499 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerDied","Data":"86379d4acf5799331a0742aa9893b6c92f9940bc1ffad16caaba0bff385c5752"} Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748530 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerDied","Data":"43ed359f664e49c83a749a8618b55ca56f4ce5a924c4eb593e96f69a19a7a395"} Nov 29 07:41:00 crc kubenswrapper[4660]: I1129 07:41:00.748542 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerDied","Data":"a81e4bd323fce6ff50e1525a46136c874bfc1cd911556a36aef3b09069cc9bd2"} Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.479385 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.479997 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" containerName="kube-state-metrics" containerID="cri-o://c9072f8214b449617015915bc0e53754d53d2811c556a0ad8a2de25d921f9148" gracePeriod=30 Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.500385 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.500434 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.813268 4660 generic.go:334] "Generic (PLEG): container finished" podID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" containerID="c9072f8214b449617015915bc0e53754d53d2811c556a0ad8a2de25d921f9148" exitCode=2 Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.813349 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e","Type":"ContainerDied","Data":"c9072f8214b449617015915bc0e53754d53d2811c556a0ad8a2de25d921f9148"} Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.816247 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerID="85382d56c7c82b7ef69a08936073d0abcfd7ae0666991e41a3168deafded5215" exitCode=0 Nov 29 07:41:05 crc kubenswrapper[4660]: I1129 07:41:05.816283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerDied","Data":"85382d56c7c82b7ef69a08936073d0abcfd7ae0666991e41a3168deafded5215"} Nov 29 07:41:06 crc kubenswrapper[4660]: I1129 07:41:06.279629 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:41:06 crc kubenswrapper[4660]: I1129 07:41:06.297351 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c4c5f6f9b-h8nfr" Nov 29 07:41:06 crc kubenswrapper[4660]: I1129 07:41:06.895806 4660 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:41:06 crc kubenswrapper[4660]: I1129 07:41:06.945641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db584\" (UniqueName: \"kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584\") pod \"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e\" (UID: \"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e\") " Nov 29 07:41:06 crc kubenswrapper[4660]: I1129 07:41:06.973466 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584" (OuterVolumeSpecName: "kube-api-access-db584") pod "d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" (UID: "d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e"). InnerVolumeSpecName "kube-api-access-db584". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.049541 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db584\" (UniqueName: \"kubernetes.io/projected/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e-kube-api-access-db584\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.088672 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150581 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150701 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150728 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150752 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150820 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150895 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.150986 4660 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkd6c\" (UniqueName: \"kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c\") pod \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\" (UID: \"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6\") " Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.151649 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.152072 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.158537 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts" (OuterVolumeSpecName: "scripts") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.159694 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c" (OuterVolumeSpecName: "kube-api-access-vkd6c") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "kube-api-access-vkd6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.249806 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.252969 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.252997 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkd6c\" (UniqueName: \"kubernetes.io/projected/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-kube-api-access-vkd6c\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.253012 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.253024 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.253039 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.288983 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.313337 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data" (OuterVolumeSpecName: "config-data") pod "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" (UID: "0f6b1638-4e8a-4b9f-9391-c94be29b9cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.356222 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.356270 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.422684 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75ddc44955-xj8mn"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.840284 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e","Type":"ContainerDied","Data":"96ee88940dcc7ef7225b5c00ad83cc5a3a300c617bb37e4f7d57bcef91c252c8"} Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.840348 4660 scope.go:117] "RemoveContainer" containerID="c9072f8214b449617015915bc0e53754d53d2811c556a0ad8a2de25d921f9148" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.840306 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.847395 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f6b1638-4e8a-4b9f-9391-c94be29b9cd6","Type":"ContainerDied","Data":"0f723a8e689412e18eb3c5ec8058e3cfd349c7efee72dcb14e1d446a074d0058"} Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.847474 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.851037 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d541b23c-6413-4bee-834c-96e5d46a9155","Type":"ContainerStarted","Data":"87d5ce89e85c97c572b4dc6221f719288996b3710ff3116dd66886b15335180e"} Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.858657 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75ddc44955-xj8mn" event={"ID":"27a79873-e3bd-4172-b5c3-17a981a9a091","Type":"ContainerStarted","Data":"4e359c8a948d007f7803772391ee12e55fa0ad4a6f704dc1389c4c7ec318abfb"} Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.858708 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75ddc44955-xj8mn" event={"ID":"27a79873-e3bd-4172-b5c3-17a981a9a091","Type":"ContainerStarted","Data":"f8c28c5c18f47e1d9c322c2067b704f5037f933f43a4e273c74f80d134cc1459"} Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.868116 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.878528 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.881157 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.895002 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908472 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: E1129 07:41:07.908870 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-notification-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908882 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-notification-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: E1129 07:41:07.908898 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-central-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908905 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-central-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: E1129 07:41:07.908922 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" containerName="kube-state-metrics" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908928 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" containerName="kube-state-metrics" Nov 29 07:41:07 crc kubenswrapper[4660]: E1129 07:41:07.908937 4660 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="proxy-httpd" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908942 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="proxy-httpd" Nov 29 07:41:07 crc kubenswrapper[4660]: E1129 07:41:07.908954 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="sg-core" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.908969 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="sg-core" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909212 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-notification-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909227 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="ceilometer-central-agent" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909242 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="proxy-httpd" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909252 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" containerName="sg-core" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909264 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" containerName="kube-state-metrics" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.909867 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.911375 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xcl2b" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.912237 4660 scope.go:117] "RemoveContainer" containerID="86379d4acf5799331a0742aa9893b6c92f9940bc1ffad16caaba0bff385c5752" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.923421 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.923658 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.922761 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.482397604 podStartE2EDuration="24.922738607s" podCreationTimestamp="2025-11-29 07:40:43 +0000 UTC" firstStartedPulling="2025-11-29 07:40:45.000799093 +0000 UTC m=+1535.554329012" lastFinishedPulling="2025-11-29 07:41:06.441140126 +0000 UTC m=+1556.994670015" observedRunningTime="2025-11-29 07:41:07.892927396 +0000 UTC m=+1558.446457315" watchObservedRunningTime="2025-11-29 07:41:07.922738607 +0000 UTC m=+1558.476268506" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.962031 4660 scope.go:117] "RemoveContainer" containerID="43ed359f664e49c83a749a8618b55ca56f4ce5a924c4eb593e96f69a19a7a395" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.966802 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbz7m\" (UniqueName: \"kubernetes.io/projected/d65ebb5a-68a4-4848-8093-92d49f373550-kube-api-access-wbz7m\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.966907 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.967107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.967204 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.974896 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:07 crc kubenswrapper[4660]: I1129 07:41:07.986065 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:07.998951 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:07.999580 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.023844 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.034292 4660 scope.go:117] "RemoveContainer" containerID="85382d56c7c82b7ef69a08936073d0abcfd7ae0666991e41a3168deafded5215" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.041839 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.069397 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.069650 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.069767 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.069839 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.069949 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.070227 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.070317 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc 
kubenswrapper[4660]: I1129 07:41:08.072147 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbz7m\" (UniqueName: \"kubernetes.io/projected/d65ebb5a-68a4-4848-8093-92d49f373550-kube-api-access-wbz7m\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.073442 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.073508 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st6wg\" (UniqueName: \"kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.073625 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.077790 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.078913 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.084861 4660 scope.go:117] "RemoveContainer" containerID="a81e4bd323fce6ff50e1525a46136c874bfc1cd911556a36aef3b09069cc9bd2" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.096284 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d65ebb5a-68a4-4848-8093-92d49f373550-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.097493 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbz7m\" (UniqueName: \"kubernetes.io/projected/d65ebb5a-68a4-4848-8093-92d49f373550-kube-api-access-wbz7m\") pod \"kube-state-metrics-0\" (UID: \"d65ebb5a-68a4-4848-8093-92d49f373550\") " pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175461 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6wg\" (UniqueName: \"kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " 
pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175553 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175629 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175661 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175684 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175750 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.175778 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.176193 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.176276 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.179463 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.180155 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc 
kubenswrapper[4660]: I1129 07:41:08.180925 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.184705 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.193438 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st6wg\" (UniqueName: \"kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg\") pod \"ceilometer-0\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") " pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.275130 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.373327 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.756893 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.774122 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.876632 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75ddc44955-xj8mn" event={"ID":"27a79873-e3bd-4172-b5c3-17a981a9a091","Type":"ContainerStarted","Data":"0b335c6827232c9101e5a43a840aca37281b4ba49aa087951f6d0f5bb5f5c211"} Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.876832 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.876913 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.882478 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d65ebb5a-68a4-4848-8093-92d49f373550","Type":"ContainerStarted","Data":"c0de72897edfad526296db0b7737327d6a69a242180401391ff052d3500a8b34"} Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.907491 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-75ddc44955-xj8mn" podStartSLOduration=10.907470931 podStartE2EDuration="10.907470931s" podCreationTimestamp="2025-11-29 07:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:08.893139517 +0000 UTC m=+1559.446669436" watchObservedRunningTime="2025-11-29 07:41:08.907470931 +0000 UTC m=+1559.461000830" Nov 29 07:41:08 crc kubenswrapper[4660]: I1129 07:41:08.939944 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:08 crc kubenswrapper[4660]: W1129 07:41:08.940380 4660 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f5ed0b_0073_4e3a_81be_33a945a78101.slice/crio-d684cd181f1b27e1a1d6346985eb1caa09db57c8c46d98f73c785d9940ee2a03 WatchSource:0}: Error finding container d684cd181f1b27e1a1d6346985eb1caa09db57c8c46d98f73c785d9940ee2a03: Status 404 returned error can't find the container with id d684cd181f1b27e1a1d6346985eb1caa09db57c8c46d98f73c785d9940ee2a03 Nov 29 07:41:09 crc kubenswrapper[4660]: I1129 07:41:09.710916 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6b1638-4e8a-4b9f-9391-c94be29b9cd6" path="/var/lib/kubelet/pods/0f6b1638-4e8a-4b9f-9391-c94be29b9cd6/volumes" Nov 29 07:41:09 crc kubenswrapper[4660]: I1129 07:41:09.711922 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e" path="/var/lib/kubelet/pods/d7f07db1-9bb5-4a2d-ab6f-62d7cec3c34e/volumes" Nov 29 07:41:09 crc kubenswrapper[4660]: I1129 07:41:09.897881 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerStarted","Data":"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b"} Nov 29 07:41:09 crc kubenswrapper[4660]: I1129 07:41:09.898020 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerStarted","Data":"d684cd181f1b27e1a1d6346985eb1caa09db57c8c46d98f73c785d9940ee2a03"} Nov 29 07:41:09 crc kubenswrapper[4660]: I1129 07:41:09.899700 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d65ebb5a-68a4-4848-8093-92d49f373550","Type":"ContainerStarted","Data":"0f468aaec004252ac8871a0d8f65bb20fd86ab934dec8be4c76123b1c6908fd7"} Nov 29 07:41:10 crc kubenswrapper[4660]: I1129 07:41:10.709043 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.943522599 podStartE2EDuration="3.709024399s" podCreationTimestamp="2025-11-29 07:41:07 +0000 UTC" firstStartedPulling="2025-11-29 07:41:08.794214874 +0000 UTC m=+1559.347744763" lastFinishedPulling="2025-11-29 07:41:09.559716664 +0000 UTC m=+1560.113246563" observedRunningTime="2025-11-29 07:41:09.930930922 +0000 UTC m=+1560.484460821" watchObservedRunningTime="2025-11-29 07:41:10.709024399 +0000 UTC m=+1561.262554298" Nov 29 07:41:10 crc kubenswrapper[4660]: I1129 07:41:10.710157 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:10 crc kubenswrapper[4660]: I1129 07:41:10.710362 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-log" containerID="cri-o://92e6dad99bc221f8e52d0994f39256b47c1087690520886155551ba69049e740" gracePeriod=30 Nov 29 07:41:10 crc kubenswrapper[4660]: I1129 07:41:10.710746 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-httpd" containerID="cri-o://734e9488efd5033c71e4bce980b74b33eea28b63b6cbddaba2d3651f320bc92d" gracePeriod=30 Nov 29 07:41:10 crc kubenswrapper[4660]: I1129 07:41:10.908860 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 29 07:41:11 crc kubenswrapper[4660]: I1129 07:41:11.920806 
4660 generic.go:334] "Generic (PLEG): container finished" podID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerID="92e6dad99bc221f8e52d0994f39256b47c1087690520886155551ba69049e740" exitCode=143 Nov 29 07:41:11 crc kubenswrapper[4660]: I1129 07:41:11.920896 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerDied","Data":"92e6dad99bc221f8e52d0994f39256b47c1087690520886155551ba69049e740"} Nov 29 07:41:11 crc kubenswrapper[4660]: I1129 07:41:11.923834 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerStarted","Data":"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044"} Nov 29 07:41:13 crc kubenswrapper[4660]: I1129 07:41:13.753378 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:41:13 crc kubenswrapper[4660]: I1129 07:41:13.753431 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75ddc44955-xj8mn" Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.964598 4660 generic.go:334] "Generic (PLEG): container finished" podID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerID="734e9488efd5033c71e4bce980b74b33eea28b63b6cbddaba2d3651f320bc92d" exitCode=0 Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.964684 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerDied","Data":"734e9488efd5033c71e4bce980b74b33eea28b63b6cbddaba2d3651f320bc92d"} Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.968635 4660 generic.go:334] "Generic (PLEG): container finished" podID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerID="156c64ed6d999d268ab91dc231927009499a9200a4ba906f1bf8c8a8b4315a1f" exitCode=137 Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.968661 4660 generic.go:334] "Generic (PLEG): container finished" podID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerID="a06fa5bc5dea81f87eb50d48ff9fc0f67ec231eb279da20f46810fc9e7f222f0" exitCode=137 Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.968683 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerDied","Data":"156c64ed6d999d268ab91dc231927009499a9200a4ba906f1bf8c8a8b4315a1f"} Nov 29 07:41:14 crc kubenswrapper[4660]: I1129 07:41:14.968706 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerDied","Data":"a06fa5bc5dea81f87eb50d48ff9fc0f67ec231eb279da20f46810fc9e7f222f0"} Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.362793 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463764 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463826 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463868 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463898 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463951 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8dr6\" (UniqueName: \"kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.463984 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.464027 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle\") pod \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\" (UID: \"3b1c3a22-b3b7-4403-b4d5-263d822b3fab\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.466317 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs" (OuterVolumeSpecName: "logs") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.473723 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.475776 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6" (OuterVolumeSpecName: "kube-api-access-b8dr6") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "kube-api-access-b8dr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.493198 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.497320 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data" (OuterVolumeSpecName: "config-data") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.499006 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.509084 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts" (OuterVolumeSpecName: "scripts") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.542804 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3b1c3a22-b3b7-4403-b4d5-263d822b3fab" (UID: "3b1c3a22-b3b7-4403-b4d5-263d822b3fab"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566397 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566439 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566484 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566495 4660 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566507 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566518 4660 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.566528 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8dr6\" (UniqueName: \"kubernetes.io/projected/3b1c3a22-b3b7-4403-b4d5-263d822b3fab-kube-api-access-b8dr6\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667493 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667578 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqk9q\" (UniqueName: \"kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667737 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667762 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667784 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667805 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667826 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667873 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts\") pod \"7d6254ae-1626-4758-8200-2a9881a69ecf\" (UID: \"7d6254ae-1626-4758-8200-2a9881a69ecf\") " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.667991 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.668237 4660 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.669700 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs" (OuterVolumeSpecName: "logs") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.674529 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q" (OuterVolumeSpecName: "kube-api-access-qqk9q") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "kube-api-access-qqk9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.674630 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts" (OuterVolumeSpecName: "scripts") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.674763 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.713505 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.743964 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.748074 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data" (OuterVolumeSpecName: "config-data") pod "7d6254ae-1626-4758-8200-2a9881a69ecf" (UID: "7d6254ae-1626-4758-8200-2a9881a69ecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770575 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqk9q\" (UniqueName: \"kubernetes.io/projected/7d6254ae-1626-4758-8200-2a9881a69ecf-kube-api-access-qqk9q\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770638 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770650 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6254ae-1626-4758-8200-2a9881a69ecf-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770678 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770688 4660 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770724 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.770732 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6254ae-1626-4758-8200-2a9881a69ecf-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.793157 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.871929 4660 reconciler_common.go:293] "Volume 
detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.981122 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d6254ae-1626-4758-8200-2a9881a69ecf","Type":"ContainerDied","Data":"304312cabe57d479cf29a401c1bd70a9eb945589e6de26af59ba26a3588d94d5"} Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.981158 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.981445 4660 scope.go:117] "RemoveContainer" containerID="734e9488efd5033c71e4bce980b74b33eea28b63b6cbddaba2d3651f320bc92d" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.988234 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76565fb74d-wgqb4" event={"ID":"3b1c3a22-b3b7-4403-b4d5-263d822b3fab","Type":"ContainerDied","Data":"95b8917e45e9412fe92688284361f7cad245c6b37f35eb1b2cd71e7d2843fa4d"} Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.988306 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76565fb74d-wgqb4" Nov 29 07:41:15 crc kubenswrapper[4660]: I1129 07:41:15.996289 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerStarted","Data":"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee"} Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.037867 4660 scope.go:117] "RemoveContainer" containerID="92e6dad99bc221f8e52d0994f39256b47c1087690520886155551ba69049e740" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.065351 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.065831 4660 scope.go:117] "RemoveContainer" containerID="156c64ed6d999d268ab91dc231927009499a9200a4ba906f1bf8c8a8b4315a1f" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.075482 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.086906 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.103220 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76565fb74d-wgqb4"] Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.110806 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:16 crc kubenswrapper[4660]: E1129 07:41:16.111500 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111528 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" Nov 29 07:41:16 crc kubenswrapper[4660]: E1129 07:41:16.111567 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon-log" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111577 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" 
containerName="horizon-log" Nov 29 07:41:16 crc kubenswrapper[4660]: E1129 07:41:16.111594 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-httpd" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111601 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-httpd" Nov 29 07:41:16 crc kubenswrapper[4660]: E1129 07:41:16.111647 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-log" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111654 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-log" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111856 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon-log" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111880 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-httpd" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111894 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" containerName="horizon" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.111907 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" containerName="glance-log" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.113359 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.118952 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.119509 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.119808 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.257683 4660 scope.go:117] "RemoveContainer" containerID="a06fa5bc5dea81f87eb50d48ff9fc0f67ec231eb279da20f46810fc9e7f222f0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.260826 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.279500 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.279554 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.279783 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.279839 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.279875 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.280105 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.280309 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.280390 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztftm\" (UniqueName: \"kubernetes.io/projected/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-kube-api-access-ztftm\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.381848 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.382367 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.382310 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383168 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztftm\" (UniqueName: \"kubernetes.io/projected/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-kube-api-access-ztftm\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383282 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383329 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383443 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383482 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.383515 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.384173 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.384339 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.394774 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.396621 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.405187 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.405280 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztftm\" (UniqueName: \"kubernetes.io/projected/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-kube-api-access-ztftm\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.406298 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ec421d-c491-4c1f-9f9d-ec260df3cc87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.485846 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2ec421d-c491-4c1f-9f9d-ec260df3cc87\") " pod="openstack/glance-default-internal-api-0" Nov 29 07:41:16 crc kubenswrapper[4660]: I1129 07:41:16.742789 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.020228 4660 generic.go:334] "Generic (PLEG): container finished" podID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerID="97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6" exitCode=1 Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.020540 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerDied","Data":"97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6"} Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.020699 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-central-agent" containerID="cri-o://89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b" gracePeriod=30 Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.021151 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="sg-core" containerID="cri-o://62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee" gracePeriod=30 Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.021192 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-notification-agent" containerID="cri-o://a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044" gracePeriod=30 Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.306229 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.716898 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1c3a22-b3b7-4403-b4d5-263d822b3fab" path="/var/lib/kubelet/pods/3b1c3a22-b3b7-4403-b4d5-263d822b3fab/volumes" Nov 29 07:41:17 crc kubenswrapper[4660]: I1129 07:41:17.717961 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6254ae-1626-4758-8200-2a9881a69ecf" path="/var/lib/kubelet/pods/7d6254ae-1626-4758-8200-2a9881a69ecf/volumes" Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.058312 4660 generic.go:334] "Generic (PLEG): container finished" podID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerID="62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee" exitCode=2 Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.058651 4660 generic.go:334] "Generic (PLEG): container finished" podID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerID="a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044" exitCode=0 Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.058345 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerDied","Data":"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee"} Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.058727 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerDied","Data":"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044"} Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.064801 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2ec421d-c491-4c1f-9f9d-ec260df3cc87","Type":"ContainerStarted","Data":"74b57c0ec4bcc97f3cebcf6a5f3f41ce9febbac75c657e6e180dc3a7869867a3"} Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.064838 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2ec421d-c491-4c1f-9f9d-ec260df3cc87","Type":"ContainerStarted","Data":"0804570e8fdc522f8abaad37e263fbd79f40fe89e0f2ebbbd8185b829427076b"} Nov 29 07:41:18 crc kubenswrapper[4660]: I1129 07:41:18.284078 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.078372 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2ec421d-c491-4c1f-9f9d-ec260df3cc87","Type":"ContainerStarted","Data":"79f78c913357db2edfa9113fb7cb3a43bba786e03416868380a8ea2571665cd0"} Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.113674 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.113651594 podStartE2EDuration="3.113651594s" podCreationTimestamp="2025-11-29 07:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:19.101193523 +0000 UTC m=+1569.654723432" watchObservedRunningTime="2025-11-29 07:41:19.113651594 +0000 UTC m=+1569.667181503" Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.349241 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.349536 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-log" containerID="cri-o://c8ab630cb1a0fd6ab4d421d9ebe5315e623fe43f3b2b40d3b6438aaec9398975" gracePeriod=30 Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.350410 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-httpd" containerID="cri-o://c701f400fbe75878b50eea8da66c082f9b38f6e8c8cb63aa86260636bdd29941" gracePeriod=30 Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.792968 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d5bfc6bd5-zc4q8" Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.859842 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.860243 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c46598bd8-gq9r5" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-api" containerID="cri-o://11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec" gracePeriod=30 Nov 29 07:41:19 crc kubenswrapper[4660]: I1129 07:41:19.860324 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c46598bd8-gq9r5" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-httpd" containerID="cri-o://480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d" gracePeriod=30 Nov 29 07:41:20 crc 
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.155867 4660 generic.go:334] "Generic (PLEG): container finished" podID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerID="89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b" exitCode=0
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.156705 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerDied","Data":"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b"}
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.156804 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10f5ed0b-0073-4e3a-81be-33a945a78101","Type":"ContainerDied","Data":"d684cd181f1b27e1a1d6346985eb1caa09db57c8c46d98f73c785d9940ee2a03"}
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.156920 4660 scope.go:117] "RemoveContainer" containerID="97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6"
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.157127 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158226 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158432 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158591 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158721 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158839 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.158926 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st6wg\" (UniqueName: \"kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.159001 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle\") pod \"10f5ed0b-0073-4e3a-81be-33a945a78101\" (UID: \"10f5ed0b-0073-4e3a-81be-33a945a78101\") "
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.162262 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.181071 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.187163 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts" (OuterVolumeSpecName: "scripts") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.206927 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg" (OuterVolumeSpecName: "kube-api-access-st6wg") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "kube-api-access-st6wg". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.209216 4660 generic.go:334] "Generic (PLEG): container finished" podID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerID="480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d" exitCode=0 Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.209304 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerDied","Data":"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d"} Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.254888 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerID="c8ab630cb1a0fd6ab4d421d9ebe5315e623fe43f3b2b40d3b6438aaec9398975" exitCode=143 Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.255710 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerDied","Data":"c8ab630cb1a0fd6ab4d421d9ebe5315e623fe43f3b2b40d3b6438aaec9398975"} Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.263826 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.264179 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.264257 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10f5ed0b-0073-4e3a-81be-33a945a78101-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.264311 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st6wg\" (UniqueName: \"kubernetes.io/projected/10f5ed0b-0073-4e3a-81be-33a945a78101-kube-api-access-st6wg\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.285877 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.307803 4660 scope.go:117] "RemoveContainer" containerID="62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.366779 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.394962 4660 scope.go:117] "RemoveContainer" containerID="a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.428430 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.443268 4660 scope.go:117] "RemoveContainer" containerID="89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.468876 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.469348 4660 scope.go:117] "RemoveContainer" containerID="97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.476833 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6\": container with ID starting with 97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6 not found: ID does not exist" containerID="97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.476892 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6"} err="failed to get container status \"97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6\": rpc error: code = NotFound desc = could not find container \"97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6\": container with ID starting with 97ab9588e7b1a315d9d00e2cf384a67007f2ce2cd74c1f5da2496be3ddc625b6 not found: ID does not exist" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.476922 4660 scope.go:117] "RemoveContainer" containerID="62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.477095 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data" (OuterVolumeSpecName: "config-data") pod "10f5ed0b-0073-4e3a-81be-33a945a78101" (UID: "10f5ed0b-0073-4e3a-81be-33a945a78101"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.477415 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee\": container with ID starting with 62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee not found: ID does not exist" containerID="62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.477447 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee"} err="failed to get container status \"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee\": rpc error: code = NotFound desc = could not find container \"62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee\": container with ID starting with 62047133562a16d2ae7c44c7bade6c1fd1dbc5501b145d2f290890bb8f97f8ee not found: ID does not exist" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.477464 4660 scope.go:117] "RemoveContainer" containerID="a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.478325 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044\": container with ID starting with a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044 not found: ID does not exist" containerID="a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.478361 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044"} err="failed to get container status \"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044\": rpc error: code = NotFound desc = could not find container \"a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044\": container with ID starting with a613ff652893b91a34544c5d18e0dcf494c0daf82602da85fa9daa87f1c95044 not found: ID does not exist" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.478383 4660 scope.go:117] "RemoveContainer" containerID="89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.478999 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b\": container with ID starting with 89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b not found: ID does not exist" containerID="89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.479032 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b"} err="failed to get container status \"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b\": rpc error: code = NotFound desc = could not find container \"89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b\": container with ID starting with 89aa56086a95e5a8ca073cd0e3311321fa05f0198ad503c5208d6ea31340780b 
not found: ID does not exist" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.570824 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f5ed0b-0073-4e3a-81be-33a945a78101-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.824532 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.833450 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843089 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.843455 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-notification-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843472 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-notification-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.843496 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="sg-core" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843502 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="sg-core" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.843517 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="proxy-httpd" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843522 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="proxy-httpd" Nov 29 07:41:20 crc kubenswrapper[4660]: E1129 07:41:20.843543 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-central-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843549 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-central-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843752 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-notification-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843780 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="ceilometer-central-agent" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843792 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="proxy-httpd" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.843808 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" containerName="sg-core" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.845669 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.848652 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.848806 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.850571 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.856178 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982132 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5zsc\" (UniqueName: \"kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982420 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982526 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982668 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982781 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.982974 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.983071 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:20 crc kubenswrapper[4660]: I1129 07:41:20.983167 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.084682 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085379 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085406 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085629 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zsc\" (UniqueName: \"kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085883 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.085983 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.086077 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.086156 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.086518 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.091340 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.091844 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.092638 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.093133 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.097022 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.109406 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zsc\" (UniqueName: \"kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc\") pod \"ceilometer-0\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.167993 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.654443 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:21 crc kubenswrapper[4660]: I1129 07:41:21.703681 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f5ed0b-0073-4e3a-81be-33a945a78101" path="/var/lib/kubelet/pods/10f5ed0b-0073-4e3a-81be-33a945a78101/volumes" Nov 29 07:41:22 crc kubenswrapper[4660]: I1129 07:41:22.283990 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerStarted","Data":"fcecf623b86f07fb46341a4651dcb01c4ccf5cb04d8a7b01d90c6d53652c9589"} Nov 29 07:41:23 crc kubenswrapper[4660]: I1129 07:41:23.302261 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerID="c701f400fbe75878b50eea8da66c082f9b38f6e8c8cb63aa86260636bdd29941" exitCode=0 Nov 29 07:41:23 crc kubenswrapper[4660]: I1129 07:41:23.302283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerDied","Data":"c701f400fbe75878b50eea8da66c082f9b38f6e8c8cb63aa86260636bdd29941"} Nov 29 07:41:23 crc kubenswrapper[4660]: I1129 07:41:23.957151 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.042153 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043027 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043152 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043241 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043318 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043390 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") " Nov 29 
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.043738 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs\") pod \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\" (UID: \"0f0c79bc-487c-4206-8a0d-1b14d7081e28\") "
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.068886 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.069408 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs" (OuterVolumeSpecName: "logs") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.069694 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0c79bc-487c-4206-8a0d-1b14d7081e28-kube-api-access-tp8xx" (OuterVolumeSpecName: "kube-api-access-tp8xx") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "kube-api-access-tp8xx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.098659 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.118910 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.152234 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.152271 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.152287 4660 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.152298 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f0c79bc-487c-4206-8a0d-1b14d7081e28-logs\") on node \"crc\" DevicePath \"\""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.152315 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp8xx\" (UniqueName: \"kubernetes.io/projected/0f0c79bc-487c-4206-8a0d-1b14d7081e28-kube-api-access-tp8xx\") on node \"crc\" DevicePath \"\""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.165552 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts" (OuterVolumeSpecName: "scripts") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.196568 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.206990 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.208602 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data" (OuterVolumeSpecName: "config-data") pod "0f0c79bc-487c-4206-8a0d-1b14d7081e28" (UID: "0f0c79bc-487c-4206-8a0d-1b14d7081e28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.254105 4660 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.254141 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.254154 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.254184 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0c79bc-487c-4206-8a0d-1b14d7081e28-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.313213 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0f0c79bc-487c-4206-8a0d-1b14d7081e28","Type":"ContainerDied","Data":"dca502b04916f04e594f536bff610b6a7d1f97ae40a6bcdf7a37639bea94fc9a"} Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.313301 4660 scope.go:117] "RemoveContainer" containerID="c701f400fbe75878b50eea8da66c082f9b38f6e8c8cb63aa86260636bdd29941" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.313318 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.365694 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.388682 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.406308 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:24 crc kubenswrapper[4660]: E1129 07:41:24.406704 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-log" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.406716 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-log" Nov 29 07:41:24 crc kubenswrapper[4660]: E1129 07:41:24.406745 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-httpd" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.406751 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-httpd" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.406974 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-log" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.406998 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" containerName="glance-httpd" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.428392 4660 scope.go:117] "RemoveContainer" 
containerID="c8ab630cb1a0fd6ab4d421d9ebe5315e623fe43f3b2b40d3b6438aaec9398975" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.429698 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.436277 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.441984 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.442174 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.562351 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.562918 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563188 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-logs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563347 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563570 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563688 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " 
pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.563804 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxsw\" (UniqueName: \"kubernetes.io/projected/1e45b487-ff42-480a-a6a2-803949758e7a-kube-api-access-xxxsw\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.665171 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-logs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.665765 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.666383 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.666703 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.667058 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxsw\" (UniqueName: \"kubernetes.io/projected/1e45b487-ff42-480a-a6a2-803949758e7a-kube-api-access-xxxsw\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.667197 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.665712 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-logs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.666583 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 
crc kubenswrapper[4660]: I1129 07:41:24.667591 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e45b487-ff42-480a-a6a2-803949758e7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.667731 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.667776 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.672763 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.674336 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.675075 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.678159 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e45b487-ff42-480a-a6a2-803949758e7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.689515 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxsw\" (UniqueName: \"kubernetes.io/projected/1e45b487-ff42-480a-a6a2-803949758e7a-kube-api-access-xxxsw\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.707408 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e45b487-ff42-480a-a6a2-803949758e7a\") " pod="openstack/glance-default-external-api-0" Nov 29 07:41:24 crc kubenswrapper[4660]: I1129 07:41:24.773951 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 29 07:41:25 crc kubenswrapper[4660]: I1129 07:41:25.323330 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerStarted","Data":"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a"} Nov 29 07:41:25 crc kubenswrapper[4660]: I1129 07:41:25.560161 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 29 07:41:25 crc kubenswrapper[4660]: I1129 07:41:25.720133 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0c79bc-487c-4206-8a0d-1b14d7081e28" path="/var/lib/kubelet/pods/0f0c79bc-487c-4206-8a0d-1b14d7081e28/volumes" Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.160955 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.350519 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e45b487-ff42-480a-a6a2-803949758e7a","Type":"ContainerStarted","Data":"54b4ad961234b8bffe7980c9e5e39c47ddc52b139c86319b394692220091c541"} Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.350861 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e45b487-ff42-480a-a6a2-803949758e7a","Type":"ContainerStarted","Data":"4936ebdf32aab0ff7cedbc2465aa3b191cd95e6f351fc38222ce6a1b5d13d74c"} Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.354829 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerStarted","Data":"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775"} Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.808817 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.809956 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.877631 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:26 crc kubenswrapper[4660]: I1129 07:41:26.886099 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:27 crc kubenswrapper[4660]: I1129 07:41:27.372760 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e45b487-ff42-480a-a6a2-803949758e7a","Type":"ContainerStarted","Data":"36b4a8c263da28b9ac0e512d2dd8ec09919542653e6fc2082fee3e6d8e7cda00"} Nov 29 07:41:27 crc kubenswrapper[4660]: I1129 07:41:27.383678 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerStarted","Data":"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744"} Nov 29 07:41:27 crc kubenswrapper[4660]: I1129 07:41:27.383941 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:27 crc kubenswrapper[4660]: I1129 07:41:27.384259 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Nov 29 07:41:27 crc kubenswrapper[4660]: I1129 07:41:27.405130 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.405112776 podStartE2EDuration="3.405112776s" podCreationTimestamp="2025-11-29 07:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:27.398154395 +0000 UTC m=+1577.951684294" watchObservedRunningTime="2025-11-29 07:41:27.405112776 +0000 UTC m=+1577.958642675" Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.418103 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.418423 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419296 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-central-agent" containerID="cri-o://dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a" gracePeriod=30 Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419556 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerStarted","Data":"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e"} Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419590 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419820 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="proxy-httpd" containerID="cri-o://26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e" gracePeriod=30 Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419863 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="sg-core" containerID="cri-o://8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744" gracePeriod=30 Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.419893 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-notification-agent" containerID="cri-o://8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775" gracePeriod=30 Nov 29 07:41:29 crc kubenswrapper[4660]: I1129 07:41:29.451862 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.611340599 podStartE2EDuration="9.451839453s" podCreationTimestamp="2025-11-29 07:41:20 +0000 UTC" firstStartedPulling="2025-11-29 07:41:21.660750684 +0000 UTC m=+1572.214280583" lastFinishedPulling="2025-11-29 07:41:28.501249538 +0000 UTC m=+1579.054779437" observedRunningTime="2025-11-29 07:41:29.446320161 +0000 UTC m=+1579.999850070" watchObservedRunningTime="2025-11-29 07:41:29.451839453 +0000 UTC m=+1580.005369352" Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.015445 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" 
Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.054122 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.427925 4660 generic.go:334] "Generic (PLEG): container finished" podID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerID="26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e" exitCode=0 Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.427957 4660 generic.go:334] "Generic (PLEG): container finished" podID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerID="8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744" exitCode=2 Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.427965 4660 generic.go:334] "Generic (PLEG): container finished" podID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerID="8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775" exitCode=0 Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.428015 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerDied","Data":"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e"} Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.428057 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerDied","Data":"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744"} Nov 29 07:41:30 crc kubenswrapper[4660]: I1129 07:41:30.428074 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerDied","Data":"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775"} Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.216747 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.295546 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v56pj\" (UniqueName: \"kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj\") pod \"277aaa4d-9633-4735-a6a5-b08a968b69e1\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.295895 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs\") pod \"277aaa4d-9633-4735-a6a5-b08a968b69e1\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.295998 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config\") pod \"277aaa4d-9633-4735-a6a5-b08a968b69e1\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.296141 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config\") pod \"277aaa4d-9633-4735-a6a5-b08a968b69e1\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.296300 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle\") pod \"277aaa4d-9633-4735-a6a5-b08a968b69e1\" (UID: \"277aaa4d-9633-4735-a6a5-b08a968b69e1\") " Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.319584 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj" (OuterVolumeSpecName: "kube-api-access-v56pj") pod "277aaa4d-9633-4735-a6a5-b08a968b69e1" (UID: "277aaa4d-9633-4735-a6a5-b08a968b69e1"). InnerVolumeSpecName "kube-api-access-v56pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.324056 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "277aaa4d-9633-4735-a6a5-b08a968b69e1" (UID: "277aaa4d-9633-4735-a6a5-b08a968b69e1"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.398780 4660 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.398969 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v56pj\" (UniqueName: \"kubernetes.io/projected/277aaa4d-9633-4735-a6a5-b08a968b69e1-kube-api-access-v56pj\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.413857 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277aaa4d-9633-4735-a6a5-b08a968b69e1" (UID: "277aaa4d-9633-4735-a6a5-b08a968b69e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.426773 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8f42b"] Nov 29 07:41:31 crc kubenswrapper[4660]: E1129 07:41:31.427324 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-api" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.427345 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-api" Nov 29 07:41:31 crc kubenswrapper[4660]: E1129 07:41:31.427363 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-httpd" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.427371 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-httpd" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.427588 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-api" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.427637 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerName="neutron-httpd" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.436163 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.464322 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8f42b"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.464655 4660 generic.go:334] "Generic (PLEG): container finished" podID="277aaa4d-9633-4735-a6a5-b08a968b69e1" containerID="11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec" exitCode=0 Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.466020 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c46598bd8-gq9r5" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.466092 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerDied","Data":"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec"} Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.466154 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c46598bd8-gq9r5" event={"ID":"277aaa4d-9633-4735-a6a5-b08a968b69e1","Type":"ContainerDied","Data":"1703acf144b31ddf3d8172ac442c5708aab0a9be20d063b29b003262d1e63555"} Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.466175 4660 scope.go:117] "RemoveContainer" containerID="480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.495393 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "277aaa4d-9633-4735-a6a5-b08a968b69e1" (UID: "277aaa4d-9633-4735-a6a5-b08a968b69e1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.501420 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.501718 4660 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.510019 4660 scope.go:117] "RemoveContainer" containerID="11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.512274 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2ffsl"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.513588 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.514313 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config" (OuterVolumeSpecName: "config") pod "277aaa4d-9633-4735-a6a5-b08a968b69e1" (UID: "277aaa4d-9633-4735-a6a5-b08a968b69e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.536006 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2ffsl"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.538170 4660 scope.go:117] "RemoveContainer" containerID="480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d" Nov 29 07:41:31 crc kubenswrapper[4660]: E1129 07:41:31.539364 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d\": container with ID starting with 480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d not found: ID does not exist" containerID="480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.539442 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d"} err="failed to get container status \"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d\": rpc error: code = NotFound desc = could not find container \"480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d\": container with ID starting with 480d01821c1478a6e61a9c361868977b705aa0ebb777e1beaa55d3adf44a330d not found: ID does not exist" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.539492 4660 scope.go:117] "RemoveContainer" containerID="11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec" Nov 29 07:41:31 crc kubenswrapper[4660]: E1129 07:41:31.540768 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec\": container with ID starting with 11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec not found: ID does not exist" containerID="11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.540911 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec"} err="failed to get container status \"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec\": rpc error: code = NotFound desc = could not find container \"11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec\": container with ID starting with 11e77a8aadf7f3bc0af6bb34895e09ccb68de466e49e54446ebc62fa11fe6aec not found: ID does not exist" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.603475 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.603539 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wxs\" (UniqueName: \"kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 
07:41:31.603558 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.603616 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffb8g\" (UniqueName: \"kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.603666 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/277aaa4d-9633-4735-a6a5-b08a968b69e1-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.607567 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6mpvs"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.609333 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.621741 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6mpvs"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.631719 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e8ff-account-create-update-mqxsz"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.632922 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.635668 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.651553 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e8ff-account-create-update-mqxsz"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.705309 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnn5c\" (UniqueName: \"kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.705874 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.705991 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4wxs\" (UniqueName: \"kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.706108 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.706256 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndd7\" (UniqueName: \"kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.706388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffb8g\" (UniqueName: \"kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.706700 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.706936 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.707896 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.709043 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.728016 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wxs\" (UniqueName: \"kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs\") pod \"nova-api-db-create-8f42b\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.731464 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffb8g\" (UniqueName: \"kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g\") pod \"nova-cell0-db-create-2ffsl\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.777978 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.809119 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnn5c\" (UniqueName: \"kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.809300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndd7\" (UniqueName: \"kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.809366 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.809388 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.810094 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.810143 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.833269 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.839569 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5fe1-account-create-update-kqp9b"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.841543 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.853861 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnn5c\" (UniqueName: \"kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c\") pod \"nova-api-e8ff-account-create-update-mqxsz\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.854127 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fe1-account-create-update-kqp9b"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.855812 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndd7\" (UniqueName: \"kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7\") pod \"nova-cell1-db-create-6mpvs\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.855876 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.879066 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.896771 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c46598bd8-gq9r5"] Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.910636 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7nt\" (UniqueName: \"kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.911254 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.925623 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:31 crc kubenswrapper[4660]: I1129 07:41:31.955568 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.014886 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.014956 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7nt\" (UniqueName: \"kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.016165 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.056763 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f1ad-account-create-update-lvpxs"] Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.058738 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.062326 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.089031 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7nt\" (UniqueName: \"kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt\") pod \"nova-cell0-5fe1-account-create-update-kqp9b\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.114984 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f1ad-account-create-update-lvpxs"] Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.116106 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-824f9\" (UniqueName: \"kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.116245 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.219491 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.219662 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-824f9\" (UniqueName: \"kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.220746 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.237683 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-824f9\" (UniqueName: \"kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9\") pod \"nova-cell1-f1ad-account-create-update-lvpxs\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.255033 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.400601 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.493876 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8f42b"] Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.510518 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2ffsl"] Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.708197 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6mpvs"] Nov 29 07:41:32 crc kubenswrapper[4660]: W1129 07:41:32.737997 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod183b6811_11ff_49be_b2b0_d415851242fa.slice/crio-3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea WatchSource:0}: Error finding container 3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea: Status 404 returned error can't find the container with id 3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea Nov 29 07:41:32 crc kubenswrapper[4660]: I1129 07:41:32.912309 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e8ff-account-create-update-mqxsz"] Nov 29 07:41:32 crc kubenswrapper[4660]: W1129 07:41:32.917884 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d5204a_1c54_41f1_861f_2812a11a6f37.slice/crio-c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a WatchSource:0}: Error finding container c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a: Status 404 returned error can't find the container with id c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.008981 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fe1-account-create-update-kqp9b"] Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.025791 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f1ad-account-create-update-lvpxs"] Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.492940 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6mpvs" event={"ID":"183b6811-11ff-49be-b2b0-d415851242fa","Type":"ContainerStarted","Data":"3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.494287 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" event={"ID":"c8d5204a-1c54-41f1-861f-2812a11a6f37","Type":"ContainerStarted","Data":"c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.496542 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8f42b" event={"ID":"cf386e95-5335-4ed0-b1a5-10744c63370e","Type":"ContainerStarted","Data":"3f287e2a8bd421245d50abc45184a0223d3d1af0568d26d3da63444bcffe0406"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.499804 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" event={"ID":"91d2c764-f7a5-4f5e-92e2-a4031e313876","Type":"ContainerStarted","Data":"2e3ce3c90914191e4d673ef9c6bd8cd838b5a2c018b61b72a6c9dff5f59c3a17"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.501490 4660 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" event={"ID":"433c834f-9c69-4b1e-9849-56fd950dcb70","Type":"ContainerStarted","Data":"5551465757cb0e31c1b1c892f740441bc756d7fb5ed4e1d22aacd7d4b990a895"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.503663 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ffsl" event={"ID":"9e098096-b124-4742-87ed-2358975493a2","Type":"ContainerStarted","Data":"598d2849e6b30ae35e28cfac92a6e444d6527a770abd185cd0d6441732a9c32a"} Nov 29 07:41:33 crc kubenswrapper[4660]: I1129 07:41:33.708859 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277aaa4d-9633-4735-a6a5-b08a968b69e1" path="/var/lib/kubelet/pods/277aaa4d-9633-4735-a6a5-b08a968b69e1/volumes" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.512918 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6mpvs" event={"ID":"183b6811-11ff-49be-b2b0-d415851242fa","Type":"ContainerStarted","Data":"f6b2788f5e4eff53ca5c674e275152232395a1885b7c743e60dfbf595b991818"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.518935 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" event={"ID":"c8d5204a-1c54-41f1-861f-2812a11a6f37","Type":"ContainerStarted","Data":"b2bf92e7f6b9b9474dc20633adbf0e62bfd1166c8af77917d5721be32fdbc2cc"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.529361 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8f42b" event={"ID":"cf386e95-5335-4ed0-b1a5-10744c63370e","Type":"ContainerStarted","Data":"3fb3ae80807a25f4dfcc84da3443e60984567682fb8935d3f22524a15514fed1"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.541377 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" event={"ID":"91d2c764-f7a5-4f5e-92e2-a4031e313876","Type":"ContainerStarted","Data":"cab8d4590880d39596434d0b2f1416d9546adbc78022032acb0ef4a6dd802956"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.545259 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" event={"ID":"433c834f-9c69-4b1e-9849-56fd950dcb70","Type":"ContainerStarted","Data":"4e6a4ab47e39635438a8c8b7d6982d2f3bc03a81b745b422c0404a0b5d95a523"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.547093 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ffsl" event={"ID":"9e098096-b124-4742-87ed-2358975493a2","Type":"ContainerStarted","Data":"15aea278b1150e472d934a005eca50d6d9cf57d5a2c4b77f3991ef62d56caaec"} Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.592187 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6mpvs" podStartSLOduration=3.592155549 podStartE2EDuration="3.592155549s" podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.53404423 +0000 UTC m=+1585.087574139" watchObservedRunningTime="2025-11-29 07:41:34.592155549 +0000 UTC m=+1585.145685448" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.608113 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-8f42b" podStartSLOduration=3.6080912080000003 podStartE2EDuration="3.608091208s" 
podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.55983986 +0000 UTC m=+1585.113369759" watchObservedRunningTime="2025-11-29 07:41:34.608091208 +0000 UTC m=+1585.161621107" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.624045 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" podStartSLOduration=3.624025867 podStartE2EDuration="3.624025867s" podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.578454373 +0000 UTC m=+1585.131984272" watchObservedRunningTime="2025-11-29 07:41:34.624025867 +0000 UTC m=+1585.177555766" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.630714 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" podStartSLOduration=3.63067075 podStartE2EDuration="3.63067075s" podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.598068303 +0000 UTC m=+1585.151598202" watchObservedRunningTime="2025-11-29 07:41:34.63067075 +0000 UTC m=+1585.184200649" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.663360 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" podStartSLOduration=3.663334628 podStartE2EDuration="3.663334628s" podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.619133282 +0000 UTC m=+1585.172663181" watchObservedRunningTime="2025-11-29 07:41:34.663334628 +0000 UTC m=+1585.216864527" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.673712 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-2ffsl" podStartSLOduration=3.673687554 podStartE2EDuration="3.673687554s" podCreationTimestamp="2025-11-29 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:41:34.639488302 +0000 UTC m=+1585.193018201" watchObservedRunningTime="2025-11-29 07:41:34.673687554 +0000 UTC m=+1585.227217453" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.775088 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.775142 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.827479 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:41:34 crc kubenswrapper[4660]: I1129 07:41:34.827949 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.500029 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.500393 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.560521 4660 generic.go:334] "Generic (PLEG): container finished" podID="c8d5204a-1c54-41f1-861f-2812a11a6f37" containerID="b2bf92e7f6b9b9474dc20633adbf0e62bfd1166c8af77917d5721be32fdbc2cc" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.560601 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" event={"ID":"c8d5204a-1c54-41f1-861f-2812a11a6f37","Type":"ContainerDied","Data":"b2bf92e7f6b9b9474dc20633adbf0e62bfd1166c8af77917d5721be32fdbc2cc"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.564777 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf386e95-5335-4ed0-b1a5-10744c63370e" containerID="3fb3ae80807a25f4dfcc84da3443e60984567682fb8935d3f22524a15514fed1" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.565002 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8f42b" event={"ID":"cf386e95-5335-4ed0-b1a5-10744c63370e","Type":"ContainerDied","Data":"3fb3ae80807a25f4dfcc84da3443e60984567682fb8935d3f22524a15514fed1"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.567595 4660 generic.go:334] "Generic (PLEG): container finished" podID="91d2c764-f7a5-4f5e-92e2-a4031e313876" containerID="cab8d4590880d39596434d0b2f1416d9546adbc78022032acb0ef4a6dd802956" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.567709 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" event={"ID":"91d2c764-f7a5-4f5e-92e2-a4031e313876","Type":"ContainerDied","Data":"cab8d4590880d39596434d0b2f1416d9546adbc78022032acb0ef4a6dd802956"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.572421 4660 generic.go:334] "Generic (PLEG): container finished" podID="433c834f-9c69-4b1e-9849-56fd950dcb70" containerID="4e6a4ab47e39635438a8c8b7d6982d2f3bc03a81b745b422c0404a0b5d95a523" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.572503 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" event={"ID":"433c834f-9c69-4b1e-9849-56fd950dcb70","Type":"ContainerDied","Data":"4e6a4ab47e39635438a8c8b7d6982d2f3bc03a81b745b422c0404a0b5d95a523"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.574692 4660 generic.go:334] "Generic (PLEG): container finished" podID="9e098096-b124-4742-87ed-2358975493a2" containerID="15aea278b1150e472d934a005eca50d6d9cf57d5a2c4b77f3991ef62d56caaec" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.574771 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ffsl" event={"ID":"9e098096-b124-4742-87ed-2358975493a2","Type":"ContainerDied","Data":"15aea278b1150e472d934a005eca50d6d9cf57d5a2c4b77f3991ef62d56caaec"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.576700 4660 generic.go:334] "Generic 
(PLEG): container finished" podID="183b6811-11ff-49be-b2b0-d415851242fa" containerID="f6b2788f5e4eff53ca5c674e275152232395a1885b7c743e60dfbf595b991818" exitCode=0 Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.576790 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6mpvs" event={"ID":"183b6811-11ff-49be-b2b0-d415851242fa","Type":"ContainerDied","Data":"f6b2788f5e4eff53ca5c674e275152232395a1885b7c743e60dfbf595b991818"} Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.576860 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:41:35 crc kubenswrapper[4660]: I1129 07:41:35.576878 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.107444 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.137730 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts\") pod \"183b6811-11ff-49be-b2b0-d415851242fa\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.138136 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndd7\" (UniqueName: \"kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7\") pod \"183b6811-11ff-49be-b2b0-d415851242fa\" (UID: \"183b6811-11ff-49be-b2b0-d415851242fa\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.139915 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "183b6811-11ff-49be-b2b0-d415851242fa" (UID: "183b6811-11ff-49be-b2b0-d415851242fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.145517 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7" (OuterVolumeSpecName: "kube-api-access-lndd7") pod "183b6811-11ff-49be-b2b0-d415851242fa" (UID: "183b6811-11ff-49be-b2b0-d415851242fa"). InnerVolumeSpecName "kube-api-access-lndd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.240564 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndd7\" (UniqueName: \"kubernetes.io/projected/183b6811-11ff-49be-b2b0-d415851242fa-kube-api-access-lndd7\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.240601 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/183b6811-11ff-49be-b2b0-d415851242fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.375804 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.381933 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.408670 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.409588 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.419449 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.552877 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw7nt\" (UniqueName: \"kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt\") pod \"91d2c764-f7a5-4f5e-92e2-a4031e313876\" (UID: \"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.552941 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffb8g\" (UniqueName: \"kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g\") pod \"9e098096-b124-4742-87ed-2358975493a2\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553006 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4wxs\" (UniqueName: \"kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs\") pod \"cf386e95-5335-4ed0-b1a5-10744c63370e\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553044 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts\") pod \"c8d5204a-1c54-41f1-861f-2812a11a6f37\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553079 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-824f9\" (UniqueName: \"kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9\") pod \"433c834f-9c69-4b1e-9849-56fd950dcb70\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553109 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts\") pod \"433c834f-9c69-4b1e-9849-56fd950dcb70\" (UID: \"433c834f-9c69-4b1e-9849-56fd950dcb70\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553169 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnn5c\" (UniqueName: \"kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c\") pod \"c8d5204a-1c54-41f1-861f-2812a11a6f37\" (UID: \"c8d5204a-1c54-41f1-861f-2812a11a6f37\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553190 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts\") pod \"91d2c764-f7a5-4f5e-92e2-a4031e313876\" (UID: 
\"91d2c764-f7a5-4f5e-92e2-a4031e313876\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553250 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts\") pod \"cf386e95-5335-4ed0-b1a5-10744c63370e\" (UID: \"cf386e95-5335-4ed0-b1a5-10744c63370e\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.553271 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts\") pod \"9e098096-b124-4742-87ed-2358975493a2\" (UID: \"9e098096-b124-4742-87ed-2358975493a2\") " Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.554103 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e098096-b124-4742-87ed-2358975493a2" (UID: "9e098096-b124-4742-87ed-2358975493a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.555196 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "433c834f-9c69-4b1e-9849-56fd950dcb70" (UID: "433c834f-9c69-4b1e-9849-56fd950dcb70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.556820 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt" (OuterVolumeSpecName: "kube-api-access-xw7nt") pod "91d2c764-f7a5-4f5e-92e2-a4031e313876" (UID: "91d2c764-f7a5-4f5e-92e2-a4031e313876"). InnerVolumeSpecName "kube-api-access-xw7nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.557122 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8d5204a-1c54-41f1-861f-2812a11a6f37" (UID: "c8d5204a-1c54-41f1-861f-2812a11a6f37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.557226 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf386e95-5335-4ed0-b1a5-10744c63370e" (UID: "cf386e95-5335-4ed0-b1a5-10744c63370e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.557225 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91d2c764-f7a5-4f5e-92e2-a4031e313876" (UID: "91d2c764-f7a5-4f5e-92e2-a4031e313876"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.557388 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g" (OuterVolumeSpecName: "kube-api-access-ffb8g") pod "9e098096-b124-4742-87ed-2358975493a2" (UID: "9e098096-b124-4742-87ed-2358975493a2"). InnerVolumeSpecName "kube-api-access-ffb8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.560096 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs" (OuterVolumeSpecName: "kube-api-access-q4wxs") pod "cf386e95-5335-4ed0-b1a5-10744c63370e" (UID: "cf386e95-5335-4ed0-b1a5-10744c63370e"). InnerVolumeSpecName "kube-api-access-q4wxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.561001 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9" (OuterVolumeSpecName: "kube-api-access-824f9") pod "433c834f-9c69-4b1e-9849-56fd950dcb70" (UID: "433c834f-9c69-4b1e-9849-56fd950dcb70"). InnerVolumeSpecName "kube-api-access-824f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.567329 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c" (OuterVolumeSpecName: "kube-api-access-jnn5c") pod "c8d5204a-1c54-41f1-861f-2812a11a6f37" (UID: "c8d5204a-1c54-41f1-861f-2812a11a6f37"). InnerVolumeSpecName "kube-api-access-jnn5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.599033 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8f42b" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.599030 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8f42b" event={"ID":"cf386e95-5335-4ed0-b1a5-10744c63370e","Type":"ContainerDied","Data":"3f287e2a8bd421245d50abc45184a0223d3d1af0568d26d3da63444bcffe0406"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.599446 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f287e2a8bd421245d50abc45184a0223d3d1af0568d26d3da63444bcffe0406" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.600957 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" event={"ID":"91d2c764-f7a5-4f5e-92e2-a4031e313876","Type":"ContainerDied","Data":"2e3ce3c90914191e4d673ef9c6bd8cd838b5a2c018b61b72a6c9dff5f59c3a17"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.600996 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e3ce3c90914191e4d673ef9c6bd8cd838b5a2c018b61b72a6c9dff5f59c3a17" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.601092 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5fe1-account-create-update-kqp9b" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.603020 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" event={"ID":"433c834f-9c69-4b1e-9849-56fd950dcb70","Type":"ContainerDied","Data":"5551465757cb0e31c1b1c892f740441bc756d7fb5ed4e1d22aacd7d4b990a895"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.603053 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5551465757cb0e31c1b1c892f740441bc756d7fb5ed4e1d22aacd7d4b990a895" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.603108 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1ad-account-create-update-lvpxs" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.604802 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2ffsl" event={"ID":"9e098096-b124-4742-87ed-2358975493a2","Type":"ContainerDied","Data":"598d2849e6b30ae35e28cfac92a6e444d6527a770abd185cd0d6441732a9c32a"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.604826 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="598d2849e6b30ae35e28cfac92a6e444d6527a770abd185cd0d6441732a9c32a" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.604895 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2ffsl" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.626283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6mpvs" event={"ID":"183b6811-11ff-49be-b2b0-d415851242fa","Type":"ContainerDied","Data":"3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.626320 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b936228005d7ee5f658cc29bfb42d3ecb9025f65c3c63faadbaedf618d208ea" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.626454 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6mpvs" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.644752 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" event={"ID":"c8d5204a-1c54-41f1-861f-2812a11a6f37","Type":"ContainerDied","Data":"c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a"} Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.644814 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11ea846d1f71bc125112dee75a87b41eda6aa5df684cd559de650371ef1e43a" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.644770 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e8ff-account-create-update-mqxsz" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655796 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnn5c\" (UniqueName: \"kubernetes.io/projected/c8d5204a-1c54-41f1-861f-2812a11a6f37-kube-api-access-jnn5c\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655837 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91d2c764-f7a5-4f5e-92e2-a4031e313876-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655851 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf386e95-5335-4ed0-b1a5-10744c63370e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655861 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e098096-b124-4742-87ed-2358975493a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655872 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw7nt\" (UniqueName: \"kubernetes.io/projected/91d2c764-f7a5-4f5e-92e2-a4031e313876-kube-api-access-xw7nt\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655881 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffb8g\" (UniqueName: \"kubernetes.io/projected/9e098096-b124-4742-87ed-2358975493a2-kube-api-access-ffb8g\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655892 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4wxs\" (UniqueName: \"kubernetes.io/projected/cf386e95-5335-4ed0-b1a5-10744c63370e-kube-api-access-q4wxs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655902 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d5204a-1c54-41f1-861f-2812a11a6f37-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655912 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-824f9\" (UniqueName: \"kubernetes.io/projected/433c834f-9c69-4b1e-9849-56fd950dcb70-kube-api-access-824f9\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:37 crc kubenswrapper[4660]: I1129 07:41:37.655923 4660 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433c834f-9c69-4b1e-9849-56fd950dcb70-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:38 crc kubenswrapper[4660]: I1129 07:41:38.188545 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:41:38 crc kubenswrapper[4660]: I1129 07:41:38.188755 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 29 07:41:38 crc kubenswrapper[4660]: I1129 07:41:38.196545 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.620569 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.664430 4660 generic.go:334] "Generic (PLEG): container finished" podID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerID="dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a" exitCode=0 Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.664482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerDied","Data":"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a"} Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.664517 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d","Type":"ContainerDied","Data":"fcecf623b86f07fb46341a4651dcb01c4ccf5cb04d8a7b01d90c6d53652c9589"} Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.664543 4660 scope.go:117] "RemoveContainer" containerID="26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.664746 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.693869 4660 scope.go:117] "RemoveContainer" containerID="8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695429 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695519 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5zsc\" (UniqueName: \"kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695597 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695720 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695757 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695800 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " 
Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695832 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.695860 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle\") pod \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\" (UID: \"d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d\") " Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.696109 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.696257 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.696724 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.696741 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.706061 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc" (OuterVolumeSpecName: "kube-api-access-w5zsc") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "kube-api-access-w5zsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.709233 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts" (OuterVolumeSpecName: "scripts") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.774139 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.804105 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.804136 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.804147 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5zsc\" (UniqueName: \"kubernetes.io/projected/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-kube-api-access-w5zsc\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.813091 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.825231 4660 scope.go:117] "RemoveContainer" containerID="8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.858856 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.871753 4660 scope.go:117] "RemoveContainer" containerID="dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.901100 4660 scope.go:117] "RemoveContainer" containerID="26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e" Nov 29 07:41:39 crc kubenswrapper[4660]: E1129 07:41:39.901722 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e\": container with ID starting with 26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e not found: ID does not exist" containerID="26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.901774 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e"} err="failed to get container status \"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e\": rpc error: code = NotFound desc = could not find container \"26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e\": container with ID starting with 26f67c7f2ef247fa8f51a03e3a7249fbce021d721470c8f7ec5a7b33c3a6904e not found: ID does not exist" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.901807 4660 scope.go:117] "RemoveContainer" containerID="8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744" Nov 29 07:41:39 crc kubenswrapper[4660]: E1129 07:41:39.905743 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744\": container with ID starting with 8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744 not found: ID does not exist" containerID="8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.905801 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744"} err="failed to get container status \"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744\": rpc error: code = NotFound desc = could not find container \"8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744\": container with ID starting with 8f23a142dc0ed75d227566dcb3bd71088913cfa92ffd0adc735bda4862ff4744 not found: ID does not exist" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.905827 4660 scope.go:117] "RemoveContainer" containerID="8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.906785 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.906807 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:39 crc kubenswrapper[4660]: E1129 07:41:39.908856 4660 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775\": container with ID starting with 8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775 not found: ID does not exist" containerID="8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.908919 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775"} err="failed to get container status \"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775\": rpc error: code = NotFound desc = could not find container \"8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775\": container with ID starting with 8817b050bbcbef34ae871bef9866869c9af5dc00923157021d9aabdf9dff5775 not found: ID does not exist" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.908957 4660 scope.go:117] "RemoveContainer" containerID="dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a" Nov 29 07:41:39 crc kubenswrapper[4660]: E1129 07:41:39.909411 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a\": container with ID starting with dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a not found: ID does not exist" containerID="dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.909443 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a"} err="failed to get container status \"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a\": rpc error: code = NotFound desc = could not find container \"dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a\": container with ID starting with dccb155e4ae632e22910bba8f587fe24d826b82cc7b460c21a7fbd669c2a7e2a not found: ID does not exist" Nov 29 07:41:39 crc kubenswrapper[4660]: I1129 07:41:39.914698 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data" (OuterVolumeSpecName: "config-data") pod "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" (UID: "d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.003422 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.009184 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.017391 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.034849 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035323 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d2c764-f7a5-4f5e-92e2-a4031e313876" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035348 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d2c764-f7a5-4f5e-92e2-a4031e313876" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035370 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d5204a-1c54-41f1-861f-2812a11a6f37" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035381 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d5204a-1c54-41f1-861f-2812a11a6f37" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035399 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-notification-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035406 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-notification-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035418 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="proxy-httpd" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035425 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="proxy-httpd" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035441 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433c834f-9c69-4b1e-9849-56fd950dcb70" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035448 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="433c834f-9c69-4b1e-9849-56fd950dcb70" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035464 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="sg-core" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035473 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="sg-core" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035489 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf386e95-5335-4ed0-b1a5-10744c63370e" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035497 4660 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf386e95-5335-4ed0-b1a5-10744c63370e" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035515 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-central-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035522 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-central-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035541 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183b6811-11ff-49be-b2b0-d415851242fa" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035549 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="183b6811-11ff-49be-b2b0-d415851242fa" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: E1129 07:41:40.035562 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e098096-b124-4742-87ed-2358975493a2" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035570 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e098096-b124-4742-87ed-2358975493a2" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035821 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e098096-b124-4742-87ed-2358975493a2" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035856 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d2c764-f7a5-4f5e-92e2-a4031e313876" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035873 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="sg-core" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035890 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-notification-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035923 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d5204a-1c54-41f1-861f-2812a11a6f37" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035941 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="ceilometer-central-agent" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035958 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf386e95-5335-4ed0-b1a5-10744c63370e" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035976 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="433c834f-9c69-4b1e-9849-56fd950dcb70" containerName="mariadb-account-create-update" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.035990 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="183b6811-11ff-49be-b2b0-d415851242fa" containerName="mariadb-database-create" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.036001 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" containerName="proxy-httpd" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.039551 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.047031 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.047336 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.047340 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.078207 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112095 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112209 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112237 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112262 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdkw\" (UniqueName: \"kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112288 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112307 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112360 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.112384 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213357 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213408 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213428 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213498 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213523 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213543 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdkw\" (UniqueName: \"kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213569 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.213587 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.214466 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.215100 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.220447 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.220590 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.221207 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.221795 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.222897 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.235294 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdkw\" (UniqueName: \"kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw\") pod \"ceilometer-0\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.355201 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:41:40 crc kubenswrapper[4660]: I1129 07:41:40.854633 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:41 crc kubenswrapper[4660]: I1129 07:41:41.684728 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerStarted","Data":"c02a787a7bbcd2192279fafd7967845e8461147ecc404d9d4e51d7a04a12d624"} Nov 29 07:41:41 crc kubenswrapper[4660]: I1129 07:41:41.685070 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerStarted","Data":"2d40cc7d04fc3cc4b9b30b2eeec715987915e3dd919412cebe5123e357ce8e3a"} Nov 29 07:41:41 crc kubenswrapper[4660]: I1129 07:41:41.704575 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d" path="/var/lib/kubelet/pods/d39c227c-ac1b-4783-bb46-bc1f3a8c3a2d/volumes" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.310805 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kllk5"] Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.318477 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.321657 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.322542 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.324565 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n7tf5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.340505 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kllk5"] Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.383039 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.383084 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.383205 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvptb\" (UniqueName: \"kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.383256 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.484796 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvptb\" (UniqueName: \"kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.484873 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.484941 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.484967 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.494866 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.495896 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.497734 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.516141 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvptb\" (UniqueName: \"kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb\") pod \"nova-cell0-conductor-db-sync-kllk5\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:42 crc kubenswrapper[4660]: I1129 07:41:42.643125 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:41:43 crc kubenswrapper[4660]: I1129 07:41:43.403523 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kllk5"] Nov 29 07:41:43 crc kubenswrapper[4660]: I1129 07:41:43.763866 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kllk5" event={"ID":"8d711e60-e860-4ba2-aa3c-a8219218cd8e","Type":"ContainerStarted","Data":"f1f4319a2f5086256c5146fa1b22e91768dd4281c92d757a1e3dda43ed62a425"} Nov 29 07:41:43 crc kubenswrapper[4660]: I1129 07:41:43.773386 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerStarted","Data":"34deb3e19ea9685d01d7367cdeaa5ead3e712cdd0c4ca5152e2bd67a8535fec9"} Nov 29 07:41:43 crc kubenswrapper[4660]: I1129 07:41:43.773441 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerStarted","Data":"333d2c6eee8ce4eebddf3543f9a5cc4618c0fa44d87cd340a5e1393fbbc3be48"} Nov 29 07:41:45 crc kubenswrapper[4660]: I1129 07:41:45.797802 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerStarted","Data":"41e3f846e24caa1f36d76a292a2a0fff0ba38e9d411e0200c00b7be8ccde2ec0"} Nov 29 07:41:45 crc kubenswrapper[4660]: I1129 07:41:45.798389 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:41:45 crc kubenswrapper[4660]: I1129 07:41:45.843181 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.28828938 podStartE2EDuration="5.843157932s" podCreationTimestamp="2025-11-29 07:41:40 +0000 UTC" firstStartedPulling="2025-11-29 07:41:40.83487772 +0000 UTC m=+1591.388407609" lastFinishedPulling="2025-11-29 07:41:45.389746262 +0000 UTC m=+1595.943276161" observedRunningTime="2025-11-29 07:41:45.825855556 +0000 UTC m=+1596.379385455" watchObservedRunningTime="2025-11-29 07:41:45.843157932 +0000 UTC m=+1596.396687841" Nov 29 07:41:49 crc kubenswrapper[4660]: I1129 07:41:49.934453 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:41:49 crc kubenswrapper[4660]: I1129 07:41:49.935424 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="proxy-httpd" containerID="cri-o://41e3f846e24caa1f36d76a292a2a0fff0ba38e9d411e0200c00b7be8ccde2ec0" gracePeriod=30 Nov 29 07:41:49 crc kubenswrapper[4660]: I1129 07:41:49.935487 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="sg-core" containerID="cri-o://333d2c6eee8ce4eebddf3543f9a5cc4618c0fa44d87cd340a5e1393fbbc3be48" gracePeriod=30 Nov 29 07:41:49 crc kubenswrapper[4660]: I1129 07:41:49.935521 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-notification-agent" containerID="cri-o://34deb3e19ea9685d01d7367cdeaa5ead3e712cdd0c4ca5152e2bd67a8535fec9" gracePeriod=30 Nov 29 07:41:49 crc kubenswrapper[4660]: I1129 07:41:49.935384 4660 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-central-agent" containerID="cri-o://c02a787a7bbcd2192279fafd7967845e8461147ecc404d9d4e51d7a04a12d624" gracePeriod=30 Nov 29 07:41:50 crc kubenswrapper[4660]: I1129 07:41:50.846715 4660 generic.go:334] "Generic (PLEG): container finished" podID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerID="41e3f846e24caa1f36d76a292a2a0fff0ba38e9d411e0200c00b7be8ccde2ec0" exitCode=0 Nov 29 07:41:50 crc kubenswrapper[4660]: I1129 07:41:50.846764 4660 generic.go:334] "Generic (PLEG): container finished" podID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerID="333d2c6eee8ce4eebddf3543f9a5cc4618c0fa44d87cd340a5e1393fbbc3be48" exitCode=2 Nov 29 07:41:50 crc kubenswrapper[4660]: I1129 07:41:50.846785 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerDied","Data":"41e3f846e24caa1f36d76a292a2a0fff0ba38e9d411e0200c00b7be8ccde2ec0"} Nov 29 07:41:50 crc kubenswrapper[4660]: I1129 07:41:50.846810 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerDied","Data":"333d2c6eee8ce4eebddf3543f9a5cc4618c0fa44d87cd340a5e1393fbbc3be48"} Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.105243 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.107378 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.134340 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.288100 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5l72\" (UniqueName: \"kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.288194 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.288262 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.389710 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 
07:41:53.389806 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.389916 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5l72\" (UniqueName: \"kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.390511 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.390715 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.424081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5l72\" (UniqueName: \"kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72\") pod \"redhat-marketplace-d94b6\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:53 crc kubenswrapper[4660]: I1129 07:41:53.428439 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:41:55 crc kubenswrapper[4660]: I1129 07:41:55.694864 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-gcx42" podUID="ff906a3b-62c0-4073-afaf-67e927a77020" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:41:55 crc kubenswrapper[4660]: I1129 07:41:55.826048 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-l9mbq" podUID="eacee01a-4708-4371-8721-a6ae49dd8f01" containerName="registry-server" probeResult="failure" output=< Nov 29 07:41:55 crc kubenswrapper[4660]: timeout: health rpc did not complete within 1s Nov 29 07:41:55 crc kubenswrapper[4660]: > Nov 29 07:41:56 crc kubenswrapper[4660]: I1129 07:41:56.917701 4660 generic.go:334] "Generic (PLEG): container finished" podID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerID="34deb3e19ea9685d01d7367cdeaa5ead3e712cdd0c4ca5152e2bd67a8535fec9" exitCode=0 Nov 29 07:41:56 crc kubenswrapper[4660]: I1129 07:41:56.917790 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerDied","Data":"34deb3e19ea9685d01d7367cdeaa5ead3e712cdd0c4ca5152e2bd67a8535fec9"} Nov 29 07:41:59 crc kubenswrapper[4660]: E1129 07:41:59.781775 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Nov 29 07:41:59 crc kubenswrapper[4660]: E1129 07:41:59.782503 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvptb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-kllk5_openstack(8d711e60-e860-4ba2-aa3c-a8219218cd8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 07:41:59 crc kubenswrapper[4660]: E1129 07:41:59.783652 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-kllk5" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" Nov 29 07:42:00 crc kubenswrapper[4660]: E1129 07:42:00.214017 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-kllk5" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" Nov 29 07:42:00 crc kubenswrapper[4660]: I1129 07:42:00.232359 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:42:00 crc kubenswrapper[4660]: I1129 07:42:00.959107 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerStarted","Data":"c604de13c9fb92e0809c0c472f3de3013fc13223d73f4fb9bf20c8fa130ea84c"} Nov 29 07:42:03 crc kubenswrapper[4660]: I1129 07:42:03.988787 4660 generic.go:334] "Generic (PLEG): container finished" podID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" 
containerID="98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b" exitCode=0 Nov 29 07:42:03 crc kubenswrapper[4660]: I1129 07:42:03.989136 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerDied","Data":"98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b"} Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.001193 4660 generic.go:334] "Generic (PLEG): container finished" podID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerID="c02a787a7bbcd2192279fafd7967845e8461147ecc404d9d4e51d7a04a12d624" exitCode=0 Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.001304 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerDied","Data":"c02a787a7bbcd2192279fafd7967845e8461147ecc404d9d4e51d7a04a12d624"} Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.500580 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.500655 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.500714 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.501461 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:42:05 crc kubenswrapper[4660]: I1129 07:42:05.501533 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" gracePeriod=600 Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.014081 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.014102 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06b19520-8cb4-433c-b0ef-b252d4501bfb","Type":"ContainerDied","Data":"2d40cc7d04fc3cc4b9b30b2eeec715987915e3dd919412cebe5123e357ce8e3a"} Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.014456 4660 scope.go:117] "RemoveContainer" containerID="41e3f846e24caa1f36d76a292a2a0fff0ba38e9d411e0200c00b7be8ccde2ec0" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.045836 4660 scope.go:117] "RemoveContainer" containerID="333d2c6eee8ce4eebddf3543f9a5cc4618c0fa44d87cd340a5e1393fbbc3be48" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.073914 4660 scope.go:117] "RemoveContainer" containerID="34deb3e19ea9685d01d7367cdeaa5ead3e712cdd0c4ca5152e2bd67a8535fec9" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.097750 4660 scope.go:117] "RemoveContainer" containerID="c02a787a7bbcd2192279fafd7967845e8461147ecc404d9d4e51d7a04a12d624" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120387 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120447 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120497 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120690 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdkw\" (UniqueName: \"kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120740 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120800 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120832 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 
07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.120851 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts\") pod \"06b19520-8cb4-433c-b0ef-b252d4501bfb\" (UID: \"06b19520-8cb4-433c-b0ef-b252d4501bfb\") " Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.122042 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.126325 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts" (OuterVolumeSpecName: "scripts") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.126553 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.126954 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw" (OuterVolumeSpecName: "kube-api-access-2rdkw") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "kube-api-access-2rdkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: E1129 07:42:06.174813 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.177873 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.207408 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.222939 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.222970 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.222982 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.222993 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06b19520-8cb4-433c-b0ef-b252d4501bfb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.223001 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdkw\" (UniqueName: \"kubernetes.io/projected/06b19520-8cb4-433c-b0ef-b252d4501bfb-kube-api-access-2rdkw\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.223009 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.245762 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.261382 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data" (OuterVolumeSpecName: "config-data") pod "06b19520-8cb4-433c-b0ef-b252d4501bfb" (UID: "06b19520-8cb4-433c-b0ef-b252d4501bfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.324403 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:06 crc kubenswrapper[4660]: I1129 07:42:06.324437 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b19520-8cb4-433c-b0ef-b252d4501bfb-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.024259 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.026938 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" exitCode=0 Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.027002 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c"} Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.027051 4660 scope.go:117] "RemoveContainer" containerID="7722213ef79c3c66cb7ac343ca03425de7ecbfb47f9db3895575925b4ea79e47" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.027834 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:42:07 crc kubenswrapper[4660]: E1129 07:42:07.028515 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.146540 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.174646 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.182877 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:42:07 crc kubenswrapper[4660]: E1129 07:42:07.183232 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-central-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183249 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-central-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: E1129 07:42:07.183264 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-notification-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183271 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-notification-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: E1129 07:42:07.183283 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="sg-core" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183290 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="sg-core" Nov 29 07:42:07 crc kubenswrapper[4660]: E1129 07:42:07.183310 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="proxy-httpd" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183315 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="proxy-httpd" Nov 29 
07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183477 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-central-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183491 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="sg-core" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183503 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="proxy-httpd" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.183518 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" containerName="ceilometer-notification-agent" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.185263 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.190799 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.196383 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.198072 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.209520 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344589 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2j4j\" (UniqueName: \"kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344648 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344672 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344722 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344750 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc 
kubenswrapper[4660]: I1129 07:42:07.344766 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344783 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.344831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446682 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446753 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2j4j\" (UniqueName: \"kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446773 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446798 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446844 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446871 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446885 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.446901 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.447292 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.447686 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.452331 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.457142 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.460722 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.461716 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.457398 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.464055 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2j4j\" (UniqueName: \"kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j\") pod \"ceilometer-0\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") " pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.501985 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:42:07 crc kubenswrapper[4660]: I1129 07:42:07.730970 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b19520-8cb4-433c-b0ef-b252d4501bfb" path="/var/lib/kubelet/pods/06b19520-8cb4-433c-b0ef-b252d4501bfb/volumes" Nov 29 07:42:08 crc kubenswrapper[4660]: I1129 07:42:08.008286 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:42:08 crc kubenswrapper[4660]: W1129 07:42:08.012448 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36f0935c_a7a6_45af_a5ec_69aeb489ffac.slice/crio-fa452aacf82beb9f9e76a22bbcf8386b8bb65fc36883c2a72705be9b284e9937 WatchSource:0}: Error finding container fa452aacf82beb9f9e76a22bbcf8386b8bb65fc36883c2a72705be9b284e9937: Status 404 returned error can't find the container with id fa452aacf82beb9f9e76a22bbcf8386b8bb65fc36883c2a72705be9b284e9937 Nov 29 07:42:08 crc kubenswrapper[4660]: I1129 07:42:08.040870 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerStarted","Data":"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96"} Nov 29 07:42:08 crc kubenswrapper[4660]: I1129 07:42:08.042773 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerStarted","Data":"fa452aacf82beb9f9e76a22bbcf8386b8bb65fc36883c2a72705be9b284e9937"} Nov 29 07:42:09 crc kubenswrapper[4660]: I1129 07:42:09.053860 4660 generic.go:334] "Generic (PLEG): container finished" podID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerID="388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96" exitCode=0 Nov 29 07:42:09 crc kubenswrapper[4660]: I1129 07:42:09.053945 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerDied","Data":"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96"} Nov 29 07:42:11 crc kubenswrapper[4660]: I1129 07:42:11.069128 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerStarted","Data":"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"} Nov 29 07:42:11 crc kubenswrapper[4660]: I1129 07:42:11.070855 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerStarted","Data":"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c"} Nov 29 07:42:13 crc kubenswrapper[4660]: I1129 07:42:13.429951 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:13 crc kubenswrapper[4660]: I1129 07:42:13.430528 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:14 crc kubenswrapper[4660]: I1129 07:42:14.095981 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerStarted","Data":"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"} Nov 29 07:42:14 crc kubenswrapper[4660]: I1129 07:42:14.476278 
4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-d94b6" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="registry-server" probeResult="failure" output=< Nov 29 07:42:14 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:42:14 crc kubenswrapper[4660]: > Nov 29 07:42:14 crc kubenswrapper[4660]: I1129 07:42:14.722883 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d94b6" podStartSLOduration=14.982756467 podStartE2EDuration="21.722861998s" podCreationTimestamp="2025-11-29 07:41:53 +0000 UTC" firstStartedPulling="2025-11-29 07:42:03.991124762 +0000 UTC m=+1614.544654671" lastFinishedPulling="2025-11-29 07:42:10.731230303 +0000 UTC m=+1621.284760202" observedRunningTime="2025-11-29 07:42:11.093924796 +0000 UTC m=+1621.647454705" watchObservedRunningTime="2025-11-29 07:42:14.722861998 +0000 UTC m=+1625.276391897" Nov 29 07:42:15 crc kubenswrapper[4660]: I1129 07:42:15.105301 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerStarted","Data":"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"} Nov 29 07:42:17 crc kubenswrapper[4660]: I1129 07:42:17.123452 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerStarted","Data":"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"} Nov 29 07:42:17 crc kubenswrapper[4660]: I1129 07:42:17.123889 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:42:17 crc kubenswrapper[4660]: I1129 07:42:17.125047 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kllk5" event={"ID":"8d711e60-e860-4ba2-aa3c-a8219218cd8e","Type":"ContainerStarted","Data":"3fae29d4663d2aa9eac693cec9866e8b43aeb879305b90aadac97f32893cedd9"} Nov 29 07:42:17 crc kubenswrapper[4660]: I1129 07:42:17.160769 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7000310440000002 podStartE2EDuration="10.160743519s" podCreationTimestamp="2025-11-29 07:42:07 +0000 UTC" firstStartedPulling="2025-11-29 07:42:08.014079994 +0000 UTC m=+1618.567609893" lastFinishedPulling="2025-11-29 07:42:16.474792459 +0000 UTC m=+1627.028322368" observedRunningTime="2025-11-29 07:42:17.150709564 +0000 UTC m=+1627.704239483" watchObservedRunningTime="2025-11-29 07:42:17.160743519 +0000 UTC m=+1627.714273418" Nov 29 07:42:17 crc kubenswrapper[4660]: I1129 07:42:17.174780 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kllk5" podStartSLOduration=2.5516378189999998 podStartE2EDuration="35.174755616s" podCreationTimestamp="2025-11-29 07:41:42 +0000 UTC" firstStartedPulling="2025-11-29 07:41:43.439793249 +0000 UTC m=+1593.993323148" lastFinishedPulling="2025-11-29 07:42:16.062911046 +0000 UTC m=+1626.616440945" observedRunningTime="2025-11-29 07:42:17.170789116 +0000 UTC m=+1627.724319025" watchObservedRunningTime="2025-11-29 07:42:17.174755616 +0000 UTC m=+1627.728285525" Nov 29 07:42:22 crc kubenswrapper[4660]: I1129 07:42:22.693454 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:42:22 crc kubenswrapper[4660]: E1129 
07:42:22.695197 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:42:23 crc kubenswrapper[4660]: I1129 07:42:23.478070 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:23 crc kubenswrapper[4660]: I1129 07:42:23.532104 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:24 crc kubenswrapper[4660]: I1129 07:42:24.304905 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.198359 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d94b6" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="registry-server" containerID="cri-o://b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c" gracePeriod=2 Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.733917 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.920113 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content\") pod \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.920265 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities\") pod \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.920474 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5l72\" (UniqueName: \"kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72\") pod \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\" (UID: \"2b78cef7-6b8a-453d-8b2b-7084b5dfd945\") " Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.923447 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities" (OuterVolumeSpecName: "utilities") pod "2b78cef7-6b8a-453d-8b2b-7084b5dfd945" (UID: "2b78cef7-6b8a-453d-8b2b-7084b5dfd945"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.938239 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b78cef7-6b8a-453d-8b2b-7084b5dfd945" (UID: "2b78cef7-6b8a-453d-8b2b-7084b5dfd945"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:42:25 crc kubenswrapper[4660]: I1129 07:42:25.943455 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72" (OuterVolumeSpecName: "kube-api-access-j5l72") pod "2b78cef7-6b8a-453d-8b2b-7084b5dfd945" (UID: "2b78cef7-6b8a-453d-8b2b-7084b5dfd945"). InnerVolumeSpecName "kube-api-access-j5l72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.022990 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.023078 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.023099 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5l72\" (UniqueName: \"kubernetes.io/projected/2b78cef7-6b8a-453d-8b2b-7084b5dfd945-kube-api-access-j5l72\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.219473 4660 generic.go:334] "Generic (PLEG): container finished" podID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerID="b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c" exitCode=0 Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.219529 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerDied","Data":"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c"} Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.219563 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d94b6" event={"ID":"2b78cef7-6b8a-453d-8b2b-7084b5dfd945","Type":"ContainerDied","Data":"c604de13c9fb92e0809c0c472f3de3013fc13223d73f4fb9bf20c8fa130ea84c"} Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.219585 4660 scope.go:117] "RemoveContainer" containerID="b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.219792 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d94b6" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.257052 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.261362 4660 scope.go:117] "RemoveContainer" containerID="388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.269415 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d94b6"] Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.285251 4660 scope.go:117] "RemoveContainer" containerID="98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.332461 4660 scope.go:117] "RemoveContainer" containerID="b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c" Nov 29 07:42:26 crc kubenswrapper[4660]: E1129 07:42:26.332936 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c\": container with ID starting with b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c not found: ID does not exist" containerID="b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.332993 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c"} err="failed to get container status \"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c\": rpc error: code = NotFound desc = could not find container \"b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c\": container with ID starting with b15574ae938128d82a09e1485f5ac89e792253ea66958d10317547226cfb1c9c not found: ID does not exist" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.333017 4660 scope.go:117] "RemoveContainer" containerID="388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96" Nov 29 07:42:26 crc kubenswrapper[4660]: E1129 07:42:26.333505 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96\": container with ID starting with 388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96 not found: ID does not exist" containerID="388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.333540 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96"} err="failed to get container status \"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96\": rpc error: code = NotFound desc = could not find container \"388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96\": container with ID starting with 388d8aeb96c8fbd494f5fd720ce8cb420d5eca8792da8638c2bfd08b1f950c96 not found: ID does not exist" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.333567 4660 scope.go:117] "RemoveContainer" containerID="98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b" Nov 29 07:42:26 crc kubenswrapper[4660]: E1129 07:42:26.334022 4660 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b\": container with ID starting with 98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b not found: ID does not exist" containerID="98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b" Nov 29 07:42:26 crc kubenswrapper[4660]: I1129 07:42:26.334079 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b"} err="failed to get container status \"98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b\": rpc error: code = NotFound desc = could not find container \"98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b\": container with ID starting with 98817631d6aa88488c58a74c95ddb9dae737b41d9b3ab454938abac08d7f819b not found: ID does not exist" Nov 29 07:42:27 crc kubenswrapper[4660]: I1129 07:42:27.706991 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" path="/var/lib/kubelet/pods/2b78cef7-6b8a-453d-8b2b-7084b5dfd945/volumes" Nov 29 07:42:35 crc kubenswrapper[4660]: I1129 07:42:35.521191 4660 generic.go:334] "Generic (PLEG): container finished" podID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" containerID="3fae29d4663d2aa9eac693cec9866e8b43aeb879305b90aadac97f32893cedd9" exitCode=0 Nov 29 07:42:35 crc kubenswrapper[4660]: I1129 07:42:35.521247 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kllk5" event={"ID":"8d711e60-e860-4ba2-aa3c-a8219218cd8e","Type":"ContainerDied","Data":"3fae29d4663d2aa9eac693cec9866e8b43aeb879305b90aadac97f32893cedd9"} Nov 29 07:42:35 crc kubenswrapper[4660]: I1129 07:42:35.693235 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:42:35 crc kubenswrapper[4660]: E1129 07:42:35.693577 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:42:36 crc kubenswrapper[4660]: I1129 07:42:36.892172 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.064391 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts\") pod \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.064465 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data\") pod \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.064640 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvptb\" (UniqueName: \"kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb\") pod \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.064753 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle\") pod \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\" (UID: \"8d711e60-e860-4ba2-aa3c-a8219218cd8e\") " Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.070361 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb" (OuterVolumeSpecName: "kube-api-access-rvptb") pod "8d711e60-e860-4ba2-aa3c-a8219218cd8e" (UID: "8d711e60-e860-4ba2-aa3c-a8219218cd8e"). InnerVolumeSpecName "kube-api-access-rvptb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.078979 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts" (OuterVolumeSpecName: "scripts") pod "8d711e60-e860-4ba2-aa3c-a8219218cd8e" (UID: "8d711e60-e860-4ba2-aa3c-a8219218cd8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.100781 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d711e60-e860-4ba2-aa3c-a8219218cd8e" (UID: "8d711e60-e860-4ba2-aa3c-a8219218cd8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.101396 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data" (OuterVolumeSpecName: "config-data") pod "8d711e60-e860-4ba2-aa3c-a8219218cd8e" (UID: "8d711e60-e860-4ba2-aa3c-a8219218cd8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.167707 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.167751 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvptb\" (UniqueName: \"kubernetes.io/projected/8d711e60-e860-4ba2-aa3c-a8219218cd8e-kube-api-access-rvptb\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.167769 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.167782 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d711e60-e860-4ba2-aa3c-a8219218cd8e-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.511709 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.544407 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kllk5" event={"ID":"8d711e60-e860-4ba2-aa3c-a8219218cd8e","Type":"ContainerDied","Data":"f1f4319a2f5086256c5146fa1b22e91768dd4281c92d757a1e3dda43ed62a425"} Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.544676 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f4319a2f5086256c5146fa1b22e91768dd4281c92d757a1e3dda43ed62a425" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.544472 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kllk5" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.674446 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:42:37 crc kubenswrapper[4660]: E1129 07:42:37.674968 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="registry-server" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.674993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="registry-server" Nov 29 07:42:37 crc kubenswrapper[4660]: E1129 07:42:37.675009 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="extract-utilities" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675016 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="extract-utilities" Nov 29 07:42:37 crc kubenswrapper[4660]: E1129 07:42:37.675024 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="extract-content" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675030 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="extract-content" Nov 29 07:42:37 crc kubenswrapper[4660]: E1129 07:42:37.675054 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" containerName="nova-cell0-conductor-db-sync" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675060 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" containerName="nova-cell0-conductor-db-sync" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675244 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b78cef7-6b8a-453d-8b2b-7084b5dfd945" containerName="registry-server" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675272 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" containerName="nova-cell0-conductor-db-sync" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.675874 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.680277 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.680517 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n7tf5" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.726920 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.777245 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcc6g\" (UniqueName: \"kubernetes.io/projected/fd768c12-7e2d-4283-a390-0f17185cb3ca-kube-api-access-vcc6g\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.778369 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.778395 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.880028 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcc6g\" (UniqueName: \"kubernetes.io/projected/fd768c12-7e2d-4283-a390-0f17185cb3ca-kube-api-access-vcc6g\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.880098 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:37 crc kubenswrapper[4660]: I1129 07:42:37.880124 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:38 crc kubenswrapper[4660]: I1129 07:42:38.260413 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:38 crc kubenswrapper[4660]: I1129 07:42:38.261045 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd768c12-7e2d-4283-a390-0f17185cb3ca-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:38 crc kubenswrapper[4660]: I1129 07:42:38.261307 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcc6g\" (UniqueName: \"kubernetes.io/projected/fd768c12-7e2d-4283-a390-0f17185cb3ca-kube-api-access-vcc6g\") pod \"nova-cell0-conductor-0\" (UID: \"fd768c12-7e2d-4283-a390-0f17185cb3ca\") " pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:38 crc kubenswrapper[4660]: I1129 07:42:38.298235 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:38 crc kubenswrapper[4660]: I1129 07:42:38.875368 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 29 07:42:39 crc kubenswrapper[4660]: I1129 07:42:39.581849 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd768c12-7e2d-4283-a390-0f17185cb3ca","Type":"ContainerStarted","Data":"805bc1aac633af10200ade320780cdadcf1fa988db20578095edd5dca22def43"} Nov 29 07:42:39 crc kubenswrapper[4660]: I1129 07:42:39.581906 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd768c12-7e2d-4283-a390-0f17185cb3ca","Type":"ContainerStarted","Data":"b64206ec92dae6af333a55c856e1e4cbd896081779fc8feca477088a645bca3f"} Nov 29 07:42:39 crc kubenswrapper[4660]: I1129 07:42:39.583316 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:39 crc kubenswrapper[4660]: I1129 07:42:39.612946 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.612925596 podStartE2EDuration="2.612925596s" podCreationTimestamp="2025-11-29 07:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:42:39.611595272 +0000 UTC m=+1650.165125171" watchObservedRunningTime="2025-11-29 07:42:39.612925596 +0000 UTC m=+1650.166455495" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.326507 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.694118 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:42:48 crc kubenswrapper[4660]: E1129 07:42:48.694323 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.769033 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bgj57"] Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.770378 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.773065 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.780329 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.786859 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.787229 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.787389 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.787436 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ct9\" (UniqueName: \"kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.800227 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bgj57"] Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.889141 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.889244 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.889268 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ct9\" (UniqueName: \"kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.889304 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.895698 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.898140 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.898162 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:48 crc kubenswrapper[4660]: I1129 07:42:48.932131 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ct9\" (UniqueName: \"kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9\") pod \"nova-cell0-cell-mapping-bgj57\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:49 crc kubenswrapper[4660]: I1129 07:42:49.103964 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.339667 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.341315 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.347160 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.355335 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.537195 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflbc\" (UniqueName: \"kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.537549 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.537668 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.596822 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.598005 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.635414 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.638865 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bflbc\" (UniqueName: \"kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.638928 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.638993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.647259 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.656922 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.678348 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bflbc\" (UniqueName: \"kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.740294 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data\") pod \"nova-scheduler-0\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " pod="openstack/nova-scheduler-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.741127 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6nzk\" (UniqueName: \"kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.741181 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.741211 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.743828 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.751358 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.767097 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.798110 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.845915 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6nzk\" (UniqueName: \"kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.845986 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.846024 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.864229 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.865746 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.885827 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.887476 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.906289 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6nzk\" (UniqueName: \"kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.936551 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") " pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.944272 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.948802 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.948882 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.948932 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:50 crc kubenswrapper[4660]: I1129 07:42:50.948994 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6t7k\" (UniqueName: \"kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.013690 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.025845 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.027264 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.049961 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050029 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050068 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050092 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050118 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6t7k\" (UniqueName: \"kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050161 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050206 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.050223 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj6w\" (UniqueName: \"kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.053981 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.056378 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.068782 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.070713 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.086984 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6t7k\" (UniqueName: \"kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k\") pod \"nova-api-0\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.103477 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.120258 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155537 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155817 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155869 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155918 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155959 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlwc\" (UniqueName: \"kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.155997 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj6w\" (UniqueName: 
\"kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.156022 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.156054 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.156075 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.156144 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.156551 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.170202 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.173909 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.184076 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj6w\" (UniqueName: \"kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w\") pod \"nova-metadata-0\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260438 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260505 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260535 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlwc\" (UniqueName: \"kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260579 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260595 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.260690 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.261628 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.262200 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.263324 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.263334 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.264029 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.264106 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.290302 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlwc\" (UniqueName: \"kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc\") pod \"dnsmasq-dns-bccf8f775-2gt82\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.383648 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.513532 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bgj57"] Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.817662 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.830833 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bgj57" event={"ID":"346088fe-bc54-45fa-95b1-7264614a2988","Type":"ContainerStarted","Data":"04916873d4dee6573056c6e7e16b04e4eb39f7cdf924a51f60036a5b241c73ab"} Nov 29 07:42:51 crc kubenswrapper[4660]: I1129 07:42:51.851063 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1955589-00bc-4c74-9f66-e1a37e5e245d","Type":"ContainerStarted","Data":"c8b116beecbc32f60815e7211da39e3727649bed6c2dc4c6f49aef5d43db07dd"} Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.283366 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.400682 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pbghl"] Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.401940 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.404472 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.404731 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.450853 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pbghl"] Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.494085 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.499843 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt466\" (UniqueName: \"kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.499928 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.499986 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.500082 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.551288 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:42:52 crc kubenswrapper[4660]: W1129 07:42:52.573349 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36deda83_a455_4c5a_8166_dec5af417463.slice/crio-1de8a1189918b512f999e911a8564f9d6c829cc7e2ac1ba8636fed76e6cd8243 WatchSource:0}: Error finding container 1de8a1189918b512f999e911a8564f9d6c829cc7e2ac1ba8636fed76e6cd8243: Status 404 returned error can't find the container with id 1de8a1189918b512f999e911a8564f9d6c829cc7e2ac1ba8636fed76e6cd8243 Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.603883 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt466\" (UniqueName: \"kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 
crc kubenswrapper[4660]: I1129 07:42:52.603966 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.604012 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.617062 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.619445 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.626201 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.628015 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.648310 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt466\" (UniqueName: \"kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466\") pod \"nova-cell1-conductor-db-sync-pbghl\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.772356 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.899294 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.926711 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerStarted","Data":"1de8a1189918b512f999e911a8564f9d6c829cc7e2ac1ba8636fed76e6cd8243"} Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.936713 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerStarted","Data":"3ab37e15643c93ce6a16055cc91d9350e1a83ac62e09fc0698c3ce43fbc68e05"} Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.977038 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" event={"ID":"6136ba9b-f915-4d91-b474-229e02f382a2","Type":"ContainerStarted","Data":"57ae512818a07773bd1efe35ac73e5361eed6e10a97d2c4c2c323692e8dcd91f"} Nov 29 07:42:52 crc kubenswrapper[4660]: I1129 07:42:52.988938 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"818c3f20-b884-4578-980d-cccc395cbfcb","Type":"ContainerStarted","Data":"35e700af30e981105d3f8073cb1d9eb27cc5b4e1a1f56b3de55db6571fd3a744"} Nov 29 07:42:53 crc kubenswrapper[4660]: I1129 07:42:53.007654 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bgj57" event={"ID":"346088fe-bc54-45fa-95b1-7264614a2988","Type":"ContainerStarted","Data":"8da211a0ec399596effa2ceccb3e41bd54d57bc3950675de040c5d4e1ddf623a"} Nov 29 07:42:53 crc kubenswrapper[4660]: I1129 07:42:53.039161 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bgj57" podStartSLOduration=5.039142525 podStartE2EDuration="5.039142525s" podCreationTimestamp="2025-11-29 07:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:42:53.026912373 +0000 UTC m=+1663.580442272" watchObservedRunningTime="2025-11-29 07:42:53.039142525 +0000 UTC m=+1663.592672424" Nov 29 07:42:53 crc kubenswrapper[4660]: I1129 07:42:53.412901 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pbghl"] Nov 29 07:42:54 crc kubenswrapper[4660]: I1129 07:42:54.021735 4660 generic.go:334] "Generic (PLEG): container finished" podID="6136ba9b-f915-4d91-b474-229e02f382a2" containerID="959dc84fbe8479b5287bb18d41813a7ac01e1627a19dc4cd1be3f0f5f6d49f36" exitCode=0 Nov 29 07:42:54 crc kubenswrapper[4660]: I1129 07:42:54.021841 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" event={"ID":"6136ba9b-f915-4d91-b474-229e02f382a2","Type":"ContainerDied","Data":"959dc84fbe8479b5287bb18d41813a7ac01e1627a19dc4cd1be3f0f5f6d49f36"} Nov 29 07:42:54 crc kubenswrapper[4660]: I1129 07:42:54.025152 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pbghl" event={"ID":"7e208f39-0b45-484b-9bfb-9b0747126b84","Type":"ContainerStarted","Data":"2025550a58bb24ceca22edb6c65cc03c22fefc72a454efba09b009650479bb20"} Nov 29 07:42:54 crc kubenswrapper[4660]: I1129 07:42:54.025201 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-pbghl" event={"ID":"7e208f39-0b45-484b-9bfb-9b0747126b84","Type":"ContainerStarted","Data":"0774acf1299cfb1d978c45ab64fd71270ce436b5d4f8aa9294426ec7d8911a83"} Nov 29 07:42:55 crc kubenswrapper[4660]: I1129 07:42:55.083087 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:42:55 crc kubenswrapper[4660]: I1129 07:42:55.101802 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:42:55 crc kubenswrapper[4660]: I1129 07:42:55.115500 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pbghl" podStartSLOduration=3.115482864 podStartE2EDuration="3.115482864s" podCreationTimestamp="2025-11-29 07:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:42:55.061957332 +0000 UTC m=+1665.615487231" watchObservedRunningTime="2025-11-29 07:42:55.115482864 +0000 UTC m=+1665.669012763" Nov 29 07:42:56 crc kubenswrapper[4660]: I1129 07:42:56.051878 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" event={"ID":"6136ba9b-f915-4d91-b474-229e02f382a2","Type":"ContainerStarted","Data":"e47b38093f45d74b42130e1e15cbe990d777fb72cece1fad7f8cf16880af8ea2"} Nov 29 07:42:56 crc kubenswrapper[4660]: I1129 07:42:56.052207 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:42:56 crc kubenswrapper[4660]: I1129 07:42:56.077843 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" podStartSLOduration=6.077820438 podStartE2EDuration="6.077820438s" podCreationTimestamp="2025-11-29 07:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:42:56.067736622 +0000 UTC m=+1666.621266541" watchObservedRunningTime="2025-11-29 07:42:56.077820438 +0000 UTC m=+1666.631350337" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.386066 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.694009 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:43:01 crc kubenswrapper[4660]: E1129 07:43:01.694845 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.799252 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dppj9"] Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.803442 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.836619 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dppj9"] Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.902462 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.917881 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="dnsmasq-dns" containerID="cri-o://6312fdc8b886c1b30c0d7fac3db759b94b2025028543912fd69aaea68375a7a2" gracePeriod=10 Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.971009 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.971261 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:01 crc kubenswrapper[4660]: I1129 07:43:01.971370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pl7g\" (UniqueName: \"kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.072833 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.072887 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.072933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pl7g\" (UniqueName: \"kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.073333 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " 
pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.073504 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.090095 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pl7g\" (UniqueName: \"kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g\") pod \"certified-operators-dppj9\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") " pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:02 crc kubenswrapper[4660]: I1129 07:43:02.124654 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:03 crc kubenswrapper[4660]: I1129 07:43:03.115642 4660 generic.go:334] "Generic (PLEG): container finished" podID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerID="6312fdc8b886c1b30c0d7fac3db759b94b2025028543912fd69aaea68375a7a2" exitCode=0 Nov 29 07:43:03 crc kubenswrapper[4660]: I1129 07:43:03.115734 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerDied","Data":"6312fdc8b886c1b30c0d7fac3db759b94b2025028543912fd69aaea68375a7a2"} Nov 29 07:43:03 crc kubenswrapper[4660]: I1129 07:43:03.861068 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.011783 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.012079 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.012133 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.012187 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.012244 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n247f\" (UniqueName: \"kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 
07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.012283 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb\") pod \"d5cded14-8a67-4297-b354-a7ed6aa91e74\" (UID: \"d5cded14-8a67-4297-b354-a7ed6aa91e74\") " Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.045698 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f" (OuterVolumeSpecName: "kube-api-access-n247f") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "kube-api-access-n247f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.122954 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n247f\" (UniqueName: \"kubernetes.io/projected/d5cded14-8a67-4297-b354-a7ed6aa91e74-kube-api-access-n247f\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.136825 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" event={"ID":"d5cded14-8a67-4297-b354-a7ed6aa91e74","Type":"ContainerDied","Data":"c2c9bd784dccce5e514e62018af950ab21c6c6904df77f72367096842d0c0d37"} Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.136878 4660 scope.go:117] "RemoveContainer" containerID="6312fdc8b886c1b30c0d7fac3db759b94b2025028543912fd69aaea68375a7a2" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.137049 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xj7bl" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.169021 4660 scope.go:117] "RemoveContainer" containerID="3fde0db0485b916482161c15d71856e3dbbc5907376cf19e50f16fa8b1c20dc3" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.355617 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.369288 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config" (OuterVolumeSpecName: "config") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.378260 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.390994 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.402864 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5cded14-8a67-4297-b354-a7ed6aa91e74" (UID: "d5cded14-8a67-4297-b354-a7ed6aa91e74"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.432750 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.432783 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.432793 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.432803 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.432812 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cded14-8a67-4297-b354-a7ed6aa91e74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.466348 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dppj9"] Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.612320 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:43:04 crc kubenswrapper[4660]: I1129 07:43:04.631537 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xj7bl"] Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.149355 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"818c3f20-b884-4578-980d-cccc395cbfcb","Type":"ContainerStarted","Data":"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.149438 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="818c3f20-b884-4578-980d-cccc395cbfcb" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd" gracePeriod=30 Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.152221 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1955589-00bc-4c74-9f66-e1a37e5e245d","Type":"ContainerStarted","Data":"665a409c77be3609c61bac2565451379ec8d1987565cf6667cfd10ba748a48f7"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.154936 4660 generic.go:334] "Generic (PLEG): container finished" podID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerID="a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1" exitCode=0 Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.154994 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerDied","Data":"a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.155015 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerStarted","Data":"322ff2ff5eca53b2d2f0bbdb5fb76ac90cca500711989b3f809865c7cbbd12a9"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.169980 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerStarted","Data":"5b47718376d1ccf864ea1118d0722d0bf988e9f8e0246811c82d7ffe4de28951"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.170026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerStarted","Data":"405af9538d7326442610622454cafd3506f3ba1e5131102f6a801fa9abca1799"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.170032 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-log" containerID="cri-o://405af9538d7326442610622454cafd3506f3ba1e5131102f6a801fa9abca1799" gracePeriod=30 Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.170096 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-metadata" containerID="cri-o://5b47718376d1ccf864ea1118d0722d0bf988e9f8e0246811c82d7ffe4de28951" gracePeriod=30 Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.179569 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.742754863 podStartE2EDuration="15.179555556s" podCreationTimestamp="2025-11-29 07:42:50 +0000 UTC" firstStartedPulling="2025-11-29 07:42:52.321264382 +0000 UTC m=+1662.874794281" lastFinishedPulling="2025-11-29 07:43:03.758065075 +0000 UTC m=+1674.311594974" observedRunningTime="2025-11-29 07:43:05.175597695 +0000 UTC m=+1675.729127594" watchObservedRunningTime="2025-11-29 07:43:05.179555556 +0000 UTC m=+1675.733085455" Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.190088 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerStarted","Data":"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.190138 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerStarted","Data":"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e"} Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.208913 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.026099166 podStartE2EDuration="15.208890322s" podCreationTimestamp="2025-11-29 07:42:50 +0000 UTC" firstStartedPulling="2025-11-29 07:42:52.575166846 +0000 UTC m=+1663.128696745" lastFinishedPulling="2025-11-29 07:43:03.757958002 +0000 UTC m=+1674.311487901" observedRunningTime="2025-11-29 07:43:05.205315831 +0000 UTC m=+1675.758845740" watchObservedRunningTime="2025-11-29 07:43:05.208890322 +0000 UTC m=+1675.762420221" Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.231921 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.246179255 podStartE2EDuration="15.231904448s" podCreationTimestamp="2025-11-29 07:42:50 +0000 UTC" firstStartedPulling="2025-11-29 07:42:51.729022129 +0000 UTC m=+1662.282552028" lastFinishedPulling="2025-11-29 07:43:03.714747322 +0000 UTC m=+1674.268277221" observedRunningTime="2025-11-29 07:43:05.231129668 +0000 UTC m=+1675.784659567" watchObservedRunningTime="2025-11-29 07:43:05.231904448 +0000 UTC m=+1675.785434347" Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.295827 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.4062167500000005 podStartE2EDuration="15.295805944s" podCreationTimestamp="2025-11-29 07:42:50 +0000 UTC" firstStartedPulling="2025-11-29 07:42:52.868402829 +0000 UTC m=+1663.421932728" lastFinishedPulling="2025-11-29 07:43:03.757992023 +0000 UTC m=+1674.311521922" observedRunningTime="2025-11-29 07:43:05.293946767 +0000 UTC m=+1675.847476676" watchObservedRunningTime="2025-11-29 07:43:05.295805944 +0000 UTC m=+1675.849335843" Nov 29 07:43:05 crc kubenswrapper[4660]: I1129 07:43:05.703739 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" path="/var/lib/kubelet/pods/d5cded14-8a67-4297-b354-a7ed6aa91e74/volumes" Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.016954 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.104274 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.202167 4660 generic.go:334] "Generic (PLEG): container finished" podID="36deda83-a455-4c5a-8166-dec5af417463" containerID="5b47718376d1ccf864ea1118d0722d0bf988e9f8e0246811c82d7ffe4de28951" exitCode=0 Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.202193 4660 generic.go:334] "Generic (PLEG): container finished" podID="36deda83-a455-4c5a-8166-dec5af417463" containerID="405af9538d7326442610622454cafd3506f3ba1e5131102f6a801fa9abca1799" exitCode=143 Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.202225 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerDied","Data":"5b47718376d1ccf864ea1118d0722d0bf988e9f8e0246811c82d7ffe4de28951"} Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.202248 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerDied","Data":"405af9538d7326442610622454cafd3506f3ba1e5131102f6a801fa9abca1799"} Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.203369 4660 generic.go:334] "Generic (PLEG): container finished" podID="346088fe-bc54-45fa-95b1-7264614a2988" containerID="8da211a0ec399596effa2ceccb3e41bd54d57bc3950675de040c5d4e1ddf623a" exitCode=0 Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.204107 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bgj57" event={"ID":"346088fe-bc54-45fa-95b1-7264614a2988","Type":"ContainerDied","Data":"8da211a0ec399596effa2ceccb3e41bd54d57bc3950675de040c5d4e1ddf623a"} Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.265155 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.265197 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:43:06 crc kubenswrapper[4660]: I1129 07:43:06.856246 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.004013 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle\") pod \"36deda83-a455-4c5a-8166-dec5af417463\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.004465 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj6w\" (UniqueName: \"kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w\") pod \"36deda83-a455-4c5a-8166-dec5af417463\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.004500 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs\") pod \"36deda83-a455-4c5a-8166-dec5af417463\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.004814 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data\") pod \"36deda83-a455-4c5a-8166-dec5af417463\" (UID: \"36deda83-a455-4c5a-8166-dec5af417463\") " Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.035440 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs" (OuterVolumeSpecName: "logs") pod "36deda83-a455-4c5a-8166-dec5af417463" (UID: "36deda83-a455-4c5a-8166-dec5af417463"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.040855 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w" (OuterVolumeSpecName: "kube-api-access-hcj6w") pod "36deda83-a455-4c5a-8166-dec5af417463" (UID: "36deda83-a455-4c5a-8166-dec5af417463"). InnerVolumeSpecName "kube-api-access-hcj6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.066819 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data" (OuterVolumeSpecName: "config-data") pod "36deda83-a455-4c5a-8166-dec5af417463" (UID: "36deda83-a455-4c5a-8166-dec5af417463"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.068764 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36deda83-a455-4c5a-8166-dec5af417463" (UID: "36deda83-a455-4c5a-8166-dec5af417463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.107185 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.107370 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcj6w\" (UniqueName: \"kubernetes.io/projected/36deda83-a455-4c5a-8166-dec5af417463-kube-api-access-hcj6w\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.107431 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36deda83-a455-4c5a-8166-dec5af417463-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.107522 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36deda83-a455-4c5a-8166-dec5af417463-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.213169 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36deda83-a455-4c5a-8166-dec5af417463","Type":"ContainerDied","Data":"1de8a1189918b512f999e911a8564f9d6c829cc7e2ac1ba8636fed76e6cd8243"} Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.213210 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.213229 4660 scope.go:117] "RemoveContainer" containerID="5b47718376d1ccf864ea1118d0722d0bf988e9f8e0246811c82d7ffe4de28951" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.215725 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerStarted","Data":"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"} Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.264139 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.275267 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.497005 4660 scope.go:117] "RemoveContainer" containerID="405af9538d7326442610622454cafd3506f3ba1e5131102f6a801fa9abca1799" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.517971 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:07 crc kubenswrapper[4660]: E1129 07:43:07.518419 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="init" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518440 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="init" Nov 29 07:43:07 crc kubenswrapper[4660]: E1129 07:43:07.518449 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-metadata" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518456 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-metadata" Nov 29 07:43:07 crc kubenswrapper[4660]: E1129 07:43:07.518475 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="dnsmasq-dns" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518482 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="dnsmasq-dns" Nov 29 07:43:07 crc kubenswrapper[4660]: E1129 07:43:07.518495 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-log" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518500 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-log" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518708 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cded14-8a67-4297-b354-a7ed6aa91e74" containerName="dnsmasq-dns" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518727 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-metadata" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.518742 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36deda83-a455-4c5a-8166-dec5af417463" containerName="nova-metadata-log" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.523865 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.527255 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.527585 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.617788 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.618043 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.618104 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.618159 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.618185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.640335 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.710283 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36deda83-a455-4c5a-8166-dec5af417463" path="/var/lib/kubelet/pods/36deda83-a455-4c5a-8166-dec5af417463/volumes" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.719585 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.719693 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.719753 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.719783 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.719827 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.720338 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.746540 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.758312 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.760919 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.764443 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data\") pod \"nova-metadata-0\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " pod="openstack/nova-metadata-0" Nov 29 07:43:07 crc kubenswrapper[4660]: I1129 07:43:07.852134 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.014427 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.144220 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4ct9\" (UniqueName: \"kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9\") pod \"346088fe-bc54-45fa-95b1-7264614a2988\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.144714 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle\") pod \"346088fe-bc54-45fa-95b1-7264614a2988\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.144806 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data\") pod \"346088fe-bc54-45fa-95b1-7264614a2988\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.144864 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts\") pod \"346088fe-bc54-45fa-95b1-7264614a2988\" (UID: \"346088fe-bc54-45fa-95b1-7264614a2988\") " Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.148383 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts" (OuterVolumeSpecName: "scripts") pod "346088fe-bc54-45fa-95b1-7264614a2988" (UID: "346088fe-bc54-45fa-95b1-7264614a2988"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.159986 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9" (OuterVolumeSpecName: "kube-api-access-k4ct9") pod "346088fe-bc54-45fa-95b1-7264614a2988" (UID: "346088fe-bc54-45fa-95b1-7264614a2988"). InnerVolumeSpecName "kube-api-access-k4ct9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.180808 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "346088fe-bc54-45fa-95b1-7264614a2988" (UID: "346088fe-bc54-45fa-95b1-7264614a2988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.191577 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data" (OuterVolumeSpecName: "config-data") pod "346088fe-bc54-45fa-95b1-7264614a2988" (UID: "346088fe-bc54-45fa-95b1-7264614a2988"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.230083 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bgj57" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.234603 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bgj57" event={"ID":"346088fe-bc54-45fa-95b1-7264614a2988","Type":"ContainerDied","Data":"04916873d4dee6573056c6e7e16b04e4eb39f7cdf924a51f60036a5b241c73ab"} Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.234652 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04916873d4dee6573056c6e7e16b04e4eb39f7cdf924a51f60036a5b241c73ab" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.246990 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.247993 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.248011 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/346088fe-bc54-45fa-95b1-7264614a2988-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.248023 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4ct9\" (UniqueName: \"kubernetes.io/projected/346088fe-bc54-45fa-95b1-7264614a2988-kube-api-access-k4ct9\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.350538 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:08 crc kubenswrapper[4660]: W1129 07:43:08.359835 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb81125c_79d8_490f_862e_c340459824f1.slice/crio-c0df0a6a9b72ac0080a17a4df32f16333849ad600c690e6d650f2df7bb0d5e6f WatchSource:0}: Error finding container c0df0a6a9b72ac0080a17a4df32f16333849ad600c690e6d650f2df7bb0d5e6f: Status 404 returned error can't find the container with id c0df0a6a9b72ac0080a17a4df32f16333849ad600c690e6d650f2df7bb0d5e6f Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.423203 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.423437 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-log" containerID="cri-o://2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" gracePeriod=30 Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.424133 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-api" containerID="cri-o://8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" gracePeriod=30 Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.454924 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.455139 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="b1955589-00bc-4c74-9f66-e1a37e5e245d" containerName="nova-scheduler-scheduler" containerID="cri-o://665a409c77be3609c61bac2565451379ec8d1987565cf6667cfd10ba748a48f7" gracePeriod=30 Nov 29 07:43:08 crc kubenswrapper[4660]: I1129 07:43:08.477121 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:09 crc kubenswrapper[4660]: I1129 07:43:09.241054 4660 generic.go:334] "Generic (PLEG): container finished" podID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerID="4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738" exitCode=0 Nov 29 07:43:09 crc kubenswrapper[4660]: I1129 07:43:09.241125 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerDied","Data":"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"} Nov 29 07:43:09 crc kubenswrapper[4660]: I1129 07:43:09.244098 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerStarted","Data":"c0df0a6a9b72ac0080a17a4df32f16333849ad600c690e6d650f2df7bb0d5e6f"} Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.246676 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.254051 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerStarted","Data":"57ec6f30e7601756c82f0fb9856487749ba632f878d1509105e665e7b4749576"} Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.254123 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerStarted","Data":"6a48a2a840357c926a190ff2c2e505f95407335c01524f93f7b0c1b28e1cdcc3"} Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.254282 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-log" containerID="cri-o://6a48a2a840357c926a190ff2c2e505f95407335c01524f93f7b0c1b28e1cdcc3" gracePeriod=30 Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.254688 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-metadata" containerID="cri-o://57ec6f30e7601756c82f0fb9856487749ba632f878d1509105e665e7b4749576" gracePeriod=30 Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259221 4660 generic.go:334] "Generic (PLEG): container finished" podID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerID="8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" exitCode=0 Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259265 4660 generic.go:334] "Generic (PLEG): container finished" podID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerID="2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" exitCode=143 Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259284 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerDied","Data":"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989"} Nov 29 07:43:10 crc kubenswrapper[4660]: 
I1129 07:43:10.259306 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerDied","Data":"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e"} Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259345 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1","Type":"ContainerDied","Data":"3ab37e15643c93ce6a16055cc91d9350e1a83ac62e09fc0698c3ce43fbc68e05"} Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259364 4660 scope.go:117] "RemoveContainer" containerID="8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.259523 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.289498 4660 scope.go:117] "RemoveContainer" containerID="2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.300368 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6t7k\" (UniqueName: \"kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k\") pod \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.300413 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data\") pod \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.300512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs\") pod \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.300635 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle\") pod \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\" (UID: \"ab6efa75-ed87-4513-bfa9-d53c1b5c75c1\") " Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.302140 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs" (OuterVolumeSpecName: "logs") pod "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" (UID: "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.304947 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.304837219 podStartE2EDuration="3.304837219s" podCreationTimestamp="2025-11-29 07:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:10.294568107 +0000 UTC m=+1680.848098006" watchObservedRunningTime="2025-11-29 07:43:10.304837219 +0000 UTC m=+1680.858367118" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.310079 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k" (OuterVolumeSpecName: "kube-api-access-z6t7k") pod "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" (UID: "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1"). InnerVolumeSpecName "kube-api-access-z6t7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.348995 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data" (OuterVolumeSpecName: "config-data") pod "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" (UID: "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.375131 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" (UID: "ab6efa75-ed87-4513-bfa9-d53c1b5c75c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.380448 4660 scope.go:117] "RemoveContainer" containerID="8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" Nov 29 07:43:10 crc kubenswrapper[4660]: E1129 07:43:10.381120 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989\": container with ID starting with 8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989 not found: ID does not exist" containerID="8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.381162 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989"} err="failed to get container status \"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989\": rpc error: code = NotFound desc = could not find container \"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989\": container with ID starting with 8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989 not found: ID does not exist" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.381186 4660 scope.go:117] "RemoveContainer" containerID="2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" Nov 29 07:43:10 crc kubenswrapper[4660]: E1129 07:43:10.382155 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e\": container with ID starting with 2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e not found: ID does not exist" containerID="2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.382180 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e"} err="failed to get container status \"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e\": rpc error: code = NotFound desc = could not find container \"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e\": container with ID starting with 2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e not found: ID does not exist" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.382194 4660 scope.go:117] "RemoveContainer" containerID="8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.382380 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989"} err="failed to get container status \"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989\": rpc error: code = NotFound desc = could not find container \"8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989\": container with ID starting with 8f23897e3bdcb4bd527a35a48b33efbb27f44dabdbe2289e242edf923539a989 not found: ID does not exist" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.382419 4660 scope.go:117] "RemoveContainer" containerID="2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.382750 4660 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e"} err="failed to get container status \"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e\": rpc error: code = NotFound desc = could not find container \"2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e\": container with ID starting with 2c9d16776438c207539324b5e59e44d1c82faa68197f2904b3a9dd548aaf6c3e not found: ID does not exist" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.403142 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.403179 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6t7k\" (UniqueName: \"kubernetes.io/projected/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-kube-api-access-z6t7k\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.403191 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.403199 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.611739 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.628681 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647153 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:10 crc kubenswrapper[4660]: E1129 07:43:10.647634 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-log" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647652 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-log" Nov 29 07:43:10 crc kubenswrapper[4660]: E1129 07:43:10.647664 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-api" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647669 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-api" Nov 29 07:43:10 crc kubenswrapper[4660]: E1129 07:43:10.647697 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346088fe-bc54-45fa-95b1-7264614a2988" containerName="nova-manage" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647703 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="346088fe-bc54-45fa-95b1-7264614a2988" containerName="nova-manage" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647871 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-log" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647880 4660 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="346088fe-bc54-45fa-95b1-7264614a2988" containerName="nova-manage" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.647897 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" containerName="nova-api-api" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.648846 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.651892 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.661144 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.707950 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5g79\" (UniqueName: \"kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.708241 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.708369 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.708506 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.810555 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.810930 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.811038 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.811178 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5g79\" (UniqueName: \"kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79\") pod \"nova-api-0\" 
(UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.813049 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.815356 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.815418 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.827768 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5g79\" (UniqueName: \"kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79\") pod \"nova-api-0\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " pod="openstack/nova-api-0" Nov 29 07:43:10 crc kubenswrapper[4660]: I1129 07:43:10.966352 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.277490 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerStarted","Data":"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"} Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.282170 4660 generic.go:334] "Generic (PLEG): container finished" podID="fb81125c-79d8-490f-862e-c340459824f1" containerID="57ec6f30e7601756c82f0fb9856487749ba632f878d1509105e665e7b4749576" exitCode=0 Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.282202 4660 generic.go:334] "Generic (PLEG): container finished" podID="fb81125c-79d8-490f-862e-c340459824f1" containerID="6a48a2a840357c926a190ff2c2e505f95407335c01524f93f7b0c1b28e1cdcc3" exitCode=143 Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.282260 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerDied","Data":"57ec6f30e7601756c82f0fb9856487749ba632f878d1509105e665e7b4749576"} Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.282291 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerDied","Data":"6a48a2a840357c926a190ff2c2e505f95407335c01524f93f7b0c1b28e1cdcc3"} Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.330515 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dppj9" podStartSLOduration=5.539905942 podStartE2EDuration="10.330492015s" podCreationTimestamp="2025-11-29 07:43:01 +0000 UTC" firstStartedPulling="2025-11-29 07:43:05.156435028 +0000 UTC m=+1675.709964927" lastFinishedPulling="2025-11-29 07:43:09.947021101 +0000 UTC m=+1680.500551000" observedRunningTime="2025-11-29 
07:43:11.308944707 +0000 UTC m=+1681.862474606" watchObservedRunningTime="2025-11-29 07:43:11.330492015 +0000 UTC m=+1681.884021924" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.423868 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.529638 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs\") pod \"fb81125c-79d8-490f-862e-c340459824f1\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.529845 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d\") pod \"fb81125c-79d8-490f-862e-c340459824f1\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.529986 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle\") pod \"fb81125c-79d8-490f-862e-c340459824f1\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.530029 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data\") pod \"fb81125c-79d8-490f-862e-c340459824f1\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.530069 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs\") pod \"fb81125c-79d8-490f-862e-c340459824f1\" (UID: \"fb81125c-79d8-490f-862e-c340459824f1\") " Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.530219 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs" (OuterVolumeSpecName: "logs") pod "fb81125c-79d8-490f-862e-c340459824f1" (UID: "fb81125c-79d8-490f-862e-c340459824f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.530559 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb81125c-79d8-490f-862e-c340459824f1-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.545289 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d" (OuterVolumeSpecName: "kube-api-access-z4n5d") pod "fb81125c-79d8-490f-862e-c340459824f1" (UID: "fb81125c-79d8-490f-862e-c340459824f1"). InnerVolumeSpecName "kube-api-access-z4n5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.559941 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb81125c-79d8-490f-862e-c340459824f1" (UID: "fb81125c-79d8-490f-862e-c340459824f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.570763 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data" (OuterVolumeSpecName: "config-data") pod "fb81125c-79d8-490f-862e-c340459824f1" (UID: "fb81125c-79d8-490f-862e-c340459824f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.582941 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fb81125c-79d8-490f-862e-c340459824f1" (UID: "fb81125c-79d8-490f-862e-c340459824f1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:11 crc kubenswrapper[4660]: W1129 07:43:11.606240 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32036f0b_e420_4104_9560_38b2516339ba.slice/crio-1363ab53187e317a48c9611056675c5ee42d927b42b527dba2c32a0983789f21 WatchSource:0}: Error finding container 1363ab53187e317a48c9611056675c5ee42d927b42b527dba2c32a0983789f21: Status 404 returned error can't find the container with id 1363ab53187e317a48c9611056675c5ee42d927b42b527dba2c32a0983789f21 Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.607933 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.631876 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/fb81125c-79d8-490f-862e-c340459824f1-kube-api-access-z4n5d\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.631909 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.631920 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.631930 4660 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb81125c-79d8-490f-862e-c340459824f1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:11 crc kubenswrapper[4660]: I1129 07:43:11.705333 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6efa75-ed87-4513-bfa9-d53c1b5c75c1" path="/var/lib/kubelet/pods/ab6efa75-ed87-4513-bfa9-d53c1b5c75c1/volumes" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.125217 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.125920 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.304144 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerStarted","Data":"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94"} Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.304475 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerStarted","Data":"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2"} Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.304491 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerStarted","Data":"1363ab53187e317a48c9611056675c5ee42d927b42b527dba2c32a0983789f21"} Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.314592 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.314588 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb81125c-79d8-490f-862e-c340459824f1","Type":"ContainerDied","Data":"c0df0a6a9b72ac0080a17a4df32f16333849ad600c690e6d650f2df7bb0d5e6f"} Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.314810 4660 scope.go:117] "RemoveContainer" containerID="57ec6f30e7601756c82f0fb9856487749ba632f878d1509105e665e7b4749576" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.331957 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.331936874 podStartE2EDuration="2.331936874s" podCreationTimestamp="2025-11-29 07:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:12.325779708 +0000 UTC m=+1682.879309617" watchObservedRunningTime="2025-11-29 07:43:12.331936874 +0000 UTC m=+1682.885466773" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.341872 4660 scope.go:117] "RemoveContainer" containerID="6a48a2a840357c926a190ff2c2e505f95407335c01524f93f7b0c1b28e1cdcc3" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.365961 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.406666 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.456832 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:12 crc kubenswrapper[4660]: E1129 07:43:12.458734 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-log" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.458759 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-log" Nov 29 07:43:12 crc kubenswrapper[4660]: E1129 07:43:12.458818 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-metadata" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.458826 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-metadata" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.465953 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-metadata" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.466018 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb81125c-79d8-490f-862e-c340459824f1" containerName="nova-metadata-log" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.467563 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.479047 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.479240 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.482849 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.549670 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqv6t\" (UniqueName: \"kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.549801 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.549845 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.549903 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.549920 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.653784 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data\") pod \"nova-metadata-0\" 
(UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.654138 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.654259 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.654374 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.654591 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqv6t\" (UniqueName: \"kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.655322 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.660114 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.660334 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.669755 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.682919 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqv6t\" (UniqueName: \"kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t\") pod \"nova-metadata-0\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " pod="openstack/nova-metadata-0" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.693382 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:43:12 crc kubenswrapper[4660]: E1129 
07:43:12.693741 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:43:12 crc kubenswrapper[4660]: I1129 07:43:12.798545 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:43:13 crc kubenswrapper[4660]: I1129 07:43:13.104018 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:13 crc kubenswrapper[4660]: I1129 07:43:13.186750 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dppj9" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="registry-server" probeResult="failure" output=< Nov 29 07:43:13 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:43:13 crc kubenswrapper[4660]: > Nov 29 07:43:13 crc kubenswrapper[4660]: I1129 07:43:13.326089 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerStarted","Data":"70ae4c1aaf031f7cb84aa05dbfdc9eb1b3392abbbb4aad4d34a09138dbf395f7"} Nov 29 07:43:13 crc kubenswrapper[4660]: I1129 07:43:13.702309 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb81125c-79d8-490f-862e-c340459824f1" path="/var/lib/kubelet/pods/fb81125c-79d8-490f-862e-c340459824f1/volumes" Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.353975 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerStarted","Data":"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4"} Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.354261 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerStarted","Data":"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1"} Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.379679 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.379651376 podStartE2EDuration="2.379651376s" podCreationTimestamp="2025-11-29 07:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:14.3731601 +0000 UTC m=+1684.926690019" watchObservedRunningTime="2025-11-29 07:43:14.379651376 +0000 UTC m=+1684.933181285" Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.941683 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5rr5"] Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.944105 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:14 crc kubenswrapper[4660]: I1129 07:43:14.952243 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5rr5"] Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.000688 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8p5\" (UniqueName: \"kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.000766 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.000868 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.103226 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8p5\" (UniqueName: \"kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.103762 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.103984 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.104374 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.104440 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.122399 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9r8p5\" (UniqueName: \"kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5\") pod \"community-operators-k5rr5\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") " pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.270938 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:15 crc kubenswrapper[4660]: I1129 07:43:15.780128 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5rr5"] Nov 29 07:43:16 crc kubenswrapper[4660]: I1129 07:43:16.397858 4660 generic.go:334] "Generic (PLEG): container finished" podID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerID="4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439" exitCode=0 Nov 29 07:43:16 crc kubenswrapper[4660]: I1129 07:43:16.397988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerDied","Data":"4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439"} Nov 29 07:43:16 crc kubenswrapper[4660]: I1129 07:43:16.398190 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerStarted","Data":"77cd5069b2d556b4eefc7e83f400201214d667d89bb27b91d43102f1b7fa900b"} Nov 29 07:43:17 crc kubenswrapper[4660]: I1129 07:43:17.410130 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerStarted","Data":"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9"} Nov 29 07:43:17 crc kubenswrapper[4660]: I1129 07:43:17.413389 4660 generic.go:334] "Generic (PLEG): container finished" podID="7e208f39-0b45-484b-9bfb-9b0747126b84" containerID="2025550a58bb24ceca22edb6c65cc03c22fefc72a454efba09b009650479bb20" exitCode=0 Nov 29 07:43:17 crc kubenswrapper[4660]: I1129 07:43:17.413430 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pbghl" event={"ID":"7e208f39-0b45-484b-9bfb-9b0747126b84","Type":"ContainerDied","Data":"2025550a58bb24ceca22edb6c65cc03c22fefc72a454efba09b009650479bb20"} Nov 29 07:43:17 crc kubenswrapper[4660]: I1129 07:43:17.798791 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:43:17 crc kubenswrapper[4660]: I1129 07:43:17.798850 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.388157 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.453697 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pbghl" event={"ID":"7e208f39-0b45-484b-9bfb-9b0747126b84","Type":"ContainerDied","Data":"0774acf1299cfb1d978c45ab64fd71270ce436b5d4f8aa9294426ec7d8911a83"} Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.453736 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0774acf1299cfb1d978c45ab64fd71270ce436b5d4f8aa9294426ec7d8911a83" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.453784 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pbghl" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.455400 4660 generic.go:334] "Generic (PLEG): container finished" podID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerID="13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9" exitCode=0 Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.455443 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerDied","Data":"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9"} Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.482245 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data\") pod \"7e208f39-0b45-484b-9bfb-9b0747126b84\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.482443 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle\") pod \"7e208f39-0b45-484b-9bfb-9b0747126b84\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.482479 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts\") pod \"7e208f39-0b45-484b-9bfb-9b0747126b84\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.482538 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt466\" (UniqueName: \"kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466\") pod \"7e208f39-0b45-484b-9bfb-9b0747126b84\" (UID: \"7e208f39-0b45-484b-9bfb-9b0747126b84\") " Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.524000 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts" (OuterVolumeSpecName: "scripts") pod "7e208f39-0b45-484b-9bfb-9b0747126b84" (UID: "7e208f39-0b45-484b-9bfb-9b0747126b84"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.524165 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466" (OuterVolumeSpecName: "kube-api-access-tt466") pod "7e208f39-0b45-484b-9bfb-9b0747126b84" (UID: "7e208f39-0b45-484b-9bfb-9b0747126b84"). InnerVolumeSpecName "kube-api-access-tt466". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.534831 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e208f39-0b45-484b-9bfb-9b0747126b84" (UID: "7e208f39-0b45-484b-9bfb-9b0747126b84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.559536 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data" (OuterVolumeSpecName: "config-data") pod "7e208f39-0b45-484b-9bfb-9b0747126b84" (UID: "7e208f39-0b45-484b-9bfb-9b0747126b84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.571567 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:43:19 crc kubenswrapper[4660]: E1129 07:43:19.572030 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e208f39-0b45-484b-9bfb-9b0747126b84" containerName="nova-cell1-conductor-db-sync" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.572047 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e208f39-0b45-484b-9bfb-9b0747126b84" containerName="nova-cell1-conductor-db-sync" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.572283 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e208f39-0b45-484b-9bfb-9b0747126b84" containerName="nova-cell1-conductor-db-sync" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.572911 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.584168 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt466\" (UniqueName: \"kubernetes.io/projected/7e208f39-0b45-484b-9bfb-9b0747126b84-kube-api-access-tt466\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.584197 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.584206 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.584214 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e208f39-0b45-484b-9bfb-9b0747126b84-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.593595 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.686168 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tmm\" (UniqueName: \"kubernetes.io/projected/6933c9c1-60f6-4099-982d-22b279546662-kube-api-access-g6tmm\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.686222 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.686288 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.787548 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tmm\" (UniqueName: \"kubernetes.io/projected/6933c9c1-60f6-4099-982d-22b279546662-kube-api-access-g6tmm\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.787605 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.787692 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.796288 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.796398 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6933c9c1-60f6-4099-982d-22b279546662-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.813494 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tmm\" (UniqueName: \"kubernetes.io/projected/6933c9c1-60f6-4099-982d-22b279546662-kube-api-access-g6tmm\") pod \"nova-cell1-conductor-0\" (UID: \"6933c9c1-60f6-4099-982d-22b279546662\") " pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:19 crc kubenswrapper[4660]: I1129 07:43:19.949520 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.417235 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.480691 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6933c9c1-60f6-4099-982d-22b279546662","Type":"ContainerStarted","Data":"14c15519cb896e47f0cd5bcd28188522cb4e211a44651e32fcaeaf44b2c4bc28"} Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.490004 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerStarted","Data":"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274"} Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.524626 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5rr5" podStartSLOduration=2.9533194739999997 podStartE2EDuration="6.524587504s" podCreationTimestamp="2025-11-29 07:43:14 +0000 UTC" firstStartedPulling="2025-11-29 07:43:16.400314678 +0000 UTC m=+1686.953844577" lastFinishedPulling="2025-11-29 07:43:19.971582708 +0000 UTC m=+1690.525112607" observedRunningTime="2025-11-29 07:43:20.522989373 +0000 UTC m=+1691.076519282" watchObservedRunningTime="2025-11-29 07:43:20.524587504 +0000 UTC m=+1691.078117393" Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.967459 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:43:20 crc kubenswrapper[4660]: I1129 07:43:20.967806 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:43:21 crc kubenswrapper[4660]: I1129 07:43:21.501831 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6933c9c1-60f6-4099-982d-22b279546662","Type":"ContainerStarted","Data":"a4d1de25da0706da3d7defa9fd511d0d3de396dd199207c4c52210bdcc118f15"} Nov 29 07:43:21 crc kubenswrapper[4660]: I1129 07:43:21.503010 4660 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.050964 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.051076 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.240041 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.294797 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.29478159 podStartE2EDuration="3.29478159s" podCreationTimestamp="2025-11-29 07:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:21.528082745 +0000 UTC m=+1692.081612634" watchObservedRunningTime="2025-11-29 07:43:22.29478159 +0000 UTC m=+1692.848311489" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.395700 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dppj9" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.799133 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:43:22 crc kubenswrapper[4660]: I1129 07:43:22.799202 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:43:23 crc kubenswrapper[4660]: I1129 07:43:23.322213 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dppj9"] Nov 29 07:43:23 crc kubenswrapper[4660]: I1129 07:43:23.518985 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dppj9" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="registry-server" containerID="cri-o://5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d" gracePeriod=2 Nov 29 07:43:23 crc kubenswrapper[4660]: I1129 07:43:23.810754 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:23 crc kubenswrapper[4660]: I1129 07:43:23.810754 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.062379 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dppj9"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.090020 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pl7g\" (UniqueName: \"kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g\") pod \"460dcc75-2002-47dd-a296-c8ca3f25e039\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") "
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.090073 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content\") pod \"460dcc75-2002-47dd-a296-c8ca3f25e039\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") "
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.090336 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities\") pod \"460dcc75-2002-47dd-a296-c8ca3f25e039\" (UID: \"460dcc75-2002-47dd-a296-c8ca3f25e039\") "
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.091492 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities" (OuterVolumeSpecName: "utilities") pod "460dcc75-2002-47dd-a296-c8ca3f25e039" (UID: "460dcc75-2002-47dd-a296-c8ca3f25e039"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.110006 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g" (OuterVolumeSpecName: "kube-api-access-6pl7g") pod "460dcc75-2002-47dd-a296-c8ca3f25e039" (UID: "460dcc75-2002-47dd-a296-c8ca3f25e039"). InnerVolumeSpecName "kube-api-access-6pl7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.159231 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "460dcc75-2002-47dd-a296-c8ca3f25e039" (UID: "460dcc75-2002-47dd-a296-c8ca3f25e039"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.192419 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.192468 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pl7g\" (UniqueName: \"kubernetes.io/projected/460dcc75-2002-47dd-a296-c8ca3f25e039-kube-api-access-6pl7g\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.192482 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460dcc75-2002-47dd-a296-c8ca3f25e039-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.530328 4660 generic.go:334] "Generic (PLEG): container finished" podID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerID="5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d" exitCode=0
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.530541 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerDied","Data":"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"}
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.530718 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dppj9" event={"ID":"460dcc75-2002-47dd-a296-c8ca3f25e039","Type":"ContainerDied","Data":"322ff2ff5eca53b2d2f0bbdb5fb76ac90cca500711989b3f809865c7cbbd12a9"}
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.530742 4660 scope.go:117] "RemoveContainer" containerID="5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.530647 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dppj9"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.562624 4660 scope.go:117] "RemoveContainer" containerID="4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.599630 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dppj9"]
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.626219 4660 scope.go:117] "RemoveContainer" containerID="a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.632236 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dppj9"]
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.657424 4660 scope.go:117] "RemoveContainer" containerID="5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"
Nov 29 07:43:24 crc kubenswrapper[4660]: E1129 07:43:24.659745 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d\": container with ID starting with 5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d not found: ID does not exist" containerID="5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.659777 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d"} err="failed to get container status \"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d\": rpc error: code = NotFound desc = could not find container \"5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d\": container with ID starting with 5b8574f850698b3675c721876edc4325eb2df8cb07c4939a4855dfe798a4661d not found: ID does not exist"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.659800 4660 scope.go:117] "RemoveContainer" containerID="4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"
Nov 29 07:43:24 crc kubenswrapper[4660]: E1129 07:43:24.661008 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738\": container with ID starting with 4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738 not found: ID does not exist" containerID="4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.661031 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738"} err="failed to get container status \"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738\": rpc error: code = NotFound desc = could not find container \"4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738\": container with ID starting with 4bc5e85ce4c8aef61144175a88e5320df1d6927cc56dc7a7ab2460715872d738 not found: ID does not exist"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.661044 4660 scope.go:117] "RemoveContainer" containerID="a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1"
Nov 29 07:43:24 crc kubenswrapper[4660]: E1129 07:43:24.661376 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1\": container with ID starting with a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1 not found: ID does not exist" containerID="a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.661424 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1"} err="failed to get container status \"a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1\": rpc error: code = NotFound desc = could not find container \"a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1\": container with ID starting with a14658c3a33e054dc4a8283172ddd7f850c20c2a835b762299bd970c1d259cf1 not found: ID does not exist"
Nov 29 07:43:24 crc kubenswrapper[4660]: I1129 07:43:24.693943 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c"
Nov 29 07:43:24 crc kubenswrapper[4660]: E1129 07:43:24.694197 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:43:25 crc kubenswrapper[4660]: I1129 07:43:25.271103 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5rr5"
Nov 29 07:43:25 crc kubenswrapper[4660]: I1129 07:43:25.271153 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5rr5"
Nov 29 07:43:25 crc kubenswrapper[4660]: I1129 07:43:25.704425 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" path="/var/lib/kubelet/pods/460dcc75-2002-47dd-a296-c8ca3f25e039/volumes"
Nov 29 07:43:26 crc kubenswrapper[4660]: I1129 07:43:26.322439 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k5rr5" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="registry-server" probeResult="failure" output=<
Nov 29 07:43:26 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s
Nov 29 07:43:26 crc kubenswrapper[4660]: >
Nov 29 07:43:29 crc kubenswrapper[4660]: I1129 07:43:29.981013 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 29 07:43:30 crc kubenswrapper[4660]: I1129 07:43:30.976126 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 29 07:43:30 crc kubenswrapper[4660]: I1129 07:43:30.976826 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 29 07:43:30 crc kubenswrapper[4660]: I1129 07:43:30.977718 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 29 07:43:30 crc kubenswrapper[4660]: I1129 07:43:30.997455 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.604195 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.607542 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.827679 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"]
Nov 29 07:43:31 crc kubenswrapper[4660]: E1129 07:43:31.828055 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="extract-content"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.828073 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="extract-content"
Nov 29 07:43:31 crc kubenswrapper[4660]: E1129 07:43:31.828089 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="registry-server"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.828096 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="registry-server"
Nov 29 07:43:31 crc kubenswrapper[4660]: E1129 07:43:31.828112 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="extract-utilities"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.828119 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="extract-utilities"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.828277 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="460dcc75-2002-47dd-a296-c8ca3f25e039" containerName="registry-server"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.829279 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.851009 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"]
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.953851 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.954157 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.954214 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.954277 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.954304 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:31 crc kubenswrapper[4660]: I1129 07:43:31.954325 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055490 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055532 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055586 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055660 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055685 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.056726 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.056762 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.057336 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.057455 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.057577 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.055706 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.076763 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") pod \"dnsmasq-dns-cd5cbd7b9-d7c6d\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.167316 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.677713 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"]
Nov 29 07:43:32 crc kubenswrapper[4660]: W1129 07:43:32.689874 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8e107af_f2dc_4882_a0a2_cfb7b1caf4b3.slice/crio-849a7376cf4f5d2a2b9d9868e426882670c86f89b4c601904cee752eae827701 WatchSource:0}: Error finding container 849a7376cf4f5d2a2b9d9868e426882670c86f89b4c601904cee752eae827701: Status 404 returned error can't find the container with id 849a7376cf4f5d2a2b9d9868e426882670c86f89b4c601904cee752eae827701
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.807589 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.809312 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 29 07:43:32 crc kubenswrapper[4660]: I1129 07:43:32.816945 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 29 07:43:33 crc kubenswrapper[4660]: I1129 07:43:33.632566 4660 generic.go:334] "Generic (PLEG): container finished" podID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerID="673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67" exitCode=0
Nov 29 07:43:33 crc kubenswrapper[4660]: I1129 07:43:33.634824 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" event={"ID":"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3","Type":"ContainerDied","Data":"673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67"}
Nov 29 07:43:33 crc kubenswrapper[4660]: I1129 07:43:33.634866 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" event={"ID":"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3","Type":"ContainerStarted","Data":"849a7376cf4f5d2a2b9d9868e426882670c86f89b4c601904cee752eae827701"}
Nov 29 07:43:33 crc kubenswrapper[4660]: I1129 07:43:33.653892 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.366284 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.584133 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.584941 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-central-agent" containerID="cri-o://34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.585043 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-notification-agent" containerID="cri-o://151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.585002 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="proxy-httpd" containerID="cri-o://c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.585002 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="sg-core" containerID="cri-o://2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.642678 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" event={"ID":"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3","Type":"ContainerStarted","Data":"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41"}
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.642727 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-api" containerID="cri-o://6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.642520 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-log" containerID="cri-o://30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2" gracePeriod=30
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.643188 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:43:34 crc kubenswrapper[4660]: I1129 07:43:34.666828 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" podStartSLOduration=3.666808656 podStartE2EDuration="3.666808656s" podCreationTimestamp="2025-11-29 07:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:34.663432751 +0000 UTC m=+1705.216962650" watchObservedRunningTime="2025-11-29 07:43:34.666808656 +0000 UTC m=+1705.220338555"
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.319888 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5rr5"
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.383418 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5rr5"
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.559793 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5rr5"]
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656115 4660 generic.go:334] "Generic (PLEG): container finished" podID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerID="c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21" exitCode=0
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656148 4660 generic.go:334] "Generic (PLEG): container finished" podID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerID="2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674" exitCode=2
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656159 4660 generic.go:334] "Generic (PLEG): container finished" podID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerID="34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7" exitCode=0
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656205 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerDied","Data":"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"}
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656235 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerDied","Data":"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"}
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.656249 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerDied","Data":"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"}
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.658859 4660 generic.go:334] "Generic (PLEG): container finished" podID="32036f0b-e420-4104-9560-38b2516339ba" containerID="30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2" exitCode=143
Nov 29 07:43:35 crc kubenswrapper[4660]: I1129 07:43:35.659004 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerDied","Data":"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2"}
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.082455 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.140009 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data\") pod \"818c3f20-b884-4578-980d-cccc395cbfcb\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.140107 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle\") pod \"818c3f20-b884-4578-980d-cccc395cbfcb\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.140319 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6nzk\" (UniqueName: \"kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk\") pod \"818c3f20-b884-4578-980d-cccc395cbfcb\" (UID: \"818c3f20-b884-4578-980d-cccc395cbfcb\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.162060 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk" (OuterVolumeSpecName: "kube-api-access-n6nzk") pod "818c3f20-b884-4578-980d-cccc395cbfcb" (UID: "818c3f20-b884-4578-980d-cccc395cbfcb"). InnerVolumeSpecName "kube-api-access-n6nzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.183769 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818c3f20-b884-4578-980d-cccc395cbfcb" (UID: "818c3f20-b884-4578-980d-cccc395cbfcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.189782 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data" (OuterVolumeSpecName: "config-data") pod "818c3f20-b884-4578-980d-cccc395cbfcb" (UID: "818c3f20-b884-4578-980d-cccc395cbfcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.242430 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6nzk\" (UniqueName: \"kubernetes.io/projected/818c3f20-b884-4578-980d-cccc395cbfcb-kube-api-access-n6nzk\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.242464 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.242475 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818c3f20-b884-4578-980d-cccc395cbfcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.580186 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649398 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649459 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649526 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649546 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649589 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649663 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649750 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2j4j\" (UniqueName: \"kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.649823 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data\") pod \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\" (UID: \"36f0935c-a7a6-45af-a5ec-69aeb489ffac\") "
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.650868 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.651983 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.656091 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts" (OuterVolumeSpecName: "scripts") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.662249 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j" (OuterVolumeSpecName: "kube-api-access-h2j4j") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "kube-api-access-h2j4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.682030 4660 generic.go:334] "Generic (PLEG): container finished" podID="818c3f20-b884-4578-980d-cccc395cbfcb" containerID="a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd" exitCode=137
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.682351 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"818c3f20-b884-4578-980d-cccc395cbfcb","Type":"ContainerDied","Data":"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"}
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.682481 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"818c3f20-b884-4578-980d-cccc395cbfcb","Type":"ContainerDied","Data":"35e700af30e981105d3f8073cb1d9eb27cc5b4e1a1f56b3de55db6571fd3a744"}
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.682555 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.682578 4660 scope.go:117] "RemoveContainer" containerID="a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.687648 4660 generic.go:334] "Generic (PLEG): container finished" podID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerID="151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf" exitCode=0
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.687885 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5rr5" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="registry-server" containerID="cri-o://c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274" gracePeriod=2
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.688217 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.688971 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerDied","Data":"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"}
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.689007 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36f0935c-a7a6-45af-a5ec-69aeb489ffac","Type":"ContainerDied","Data":"fa452aacf82beb9f9e76a22bbcf8386b8bb65fc36883c2a72705be9b284e9937"}
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.694046 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.694496 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.720755 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.731050 4660 scope.go:117] "RemoveContainer" containerID="a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.735178 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd\": container with ID starting with a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd not found: ID does not exist" containerID="a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.735247 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd"} err="failed to get container status \"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd\": rpc error: code = NotFound desc = could not find container \"a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd\": container with ID starting with a685e252a74fbed0df58270b9be9a48d8cb240c7f79f98d665008f87a382f2bd not found: ID does not exist"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.735304 4660 scope.go:117] "RemoveContainer" containerID="c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.743974 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752309 4660 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752339 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752352 4660 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752361 4660 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36f0935c-a7a6-45af-a5ec-69aeb489ffac-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752368 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-scripts\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.752376 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2j4j\" (UniqueName: \"kubernetes.io/projected/36f0935c-a7a6-45af-a5ec-69aeb489ffac-kube-api-access-h2j4j\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.756759 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.780063 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data" (OuterVolumeSpecName: "config-data") pod "36f0935c-a7a6-45af-a5ec-69aeb489ffac" (UID: "36f0935c-a7a6-45af-a5ec-69aeb489ffac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.812209 4660 scope.go:117] "RemoveContainer" containerID="2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.823424 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.834468 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.851385 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.851898 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="proxy-httpd"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.851922 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="proxy-httpd"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.851936 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-central-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.851943 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-central-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.851962 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818c3f20-b884-4578-980d-cccc395cbfcb" containerName="nova-cell1-novncproxy-novncproxy"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.851971 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="818c3f20-b884-4578-980d-cccc395cbfcb" containerName="nova-cell1-novncproxy-novncproxy"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.851991 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="sg-core"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852000 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="sg-core"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.852027 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-notification-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852034 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-notification-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852253 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="proxy-httpd"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852270 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-notification-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852278 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="ceilometer-central-agent"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852319 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="818c3f20-b884-4578-980d-cccc395cbfcb" containerName="nova-cell1-novncproxy-novncproxy"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.852333 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" containerName="sg-core"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.853143 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.855536 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-config-data\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.855562 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f0935c-a7a6-45af-a5ec-69aeb489ffac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.860603 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.861375 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.861571 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.873794 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.881864 4660 scope.go:117] "RemoveContainer" containerID="151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.926974 4660 scope.go:117] "RemoveContainer" containerID="34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.957196 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.957265 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.957333 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.957359 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgx9\" (UniqueName: \"kubernetes.io/projected/544cff03-d589-4ba8-ac61-e5976fe393d9-kube-api-access-xwgx9\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.957414 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.958925 4660 scope.go:117] "RemoveContainer" containerID="c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.966916 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21\": container with ID starting with c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21 not found: ID does not exist" containerID="c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.966962 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21"} err="failed to get container status \"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21\": rpc error: code = NotFound desc = could not find container \"c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21\": container with ID starting with c4ecbefd4ed1a5254f6b29877926b5992f4ff10f6ec4f396b0aa7c234968ca21 not found: ID does not exist"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.966991 4660 scope.go:117] "RemoveContainer" containerID="2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.967534 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674\": container with ID starting with 2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674 not found: ID does not exist" containerID="2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.967562 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674"} err="failed to get container status \"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674\": rpc error: code = NotFound desc = could not find container \"2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674\": container with ID starting with 2a04f57a7955902b24bdd8f515dddcf9c3934aa72b275bc9a065405f5d823674 not found: ID does not exist"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.967580 4660 scope.go:117] "RemoveContainer" containerID="151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.968109 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf\": container with ID starting with 151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf not found: ID does not exist" containerID="151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.968132 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf"} err="failed to get container status \"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf\": rpc error: code = NotFound desc = could not find container \"151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf\": container with ID starting with 151d7ffe7ab19d89a35d392273c8bd5368cc9133465c63e8323ce08447724dcf not found: ID does not exist"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.968149 4660 scope.go:117] "RemoveContainer" containerID="34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"
Nov 29 07:43:36 crc kubenswrapper[4660]: E1129 07:43:36.968490 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7\": container with ID starting with 34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7 not found: ID does not exist" containerID="34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"
Nov 29 07:43:36 crc kubenswrapper[4660]: I1129 07:43:36.968517 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7"} err="failed to get container status \"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7\": rpc error: code = NotFound desc = could not find container \"34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7\": container with ID starting with 34ab5b117dfa44dc9024d1df16f67ede8108970ff0041a5a0ab5397921d606b7 not found: ID does not exist"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.063177 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.063248 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.063286 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.063313 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgx9\" (UniqueName: \"kubernetes.io/projected/544cff03-d589-4ba8-ac61-e5976fe393d9-kube-api-access-xwgx9\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.063369 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.080323 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.084851 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.088186 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.092966 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cff03-d589-4ba8-ac61-e5976fe393d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.093696 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgx9\" (UniqueName: \"kubernetes.io/projected/544cff03-d589-4ba8-ac61-e5976fe393d9-kube-api-access-xwgx9\") pod \"nova-cell1-novncproxy-0\" (UID: \"544cff03-d589-4ba8-ac61-e5976fe393d9\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.189453 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.233382 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.247020 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.272407 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.274511 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.284728 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.285050 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.285167 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.303074 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.318600 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5rr5"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.372982 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities\") pod \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") "
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373080 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r8p5\" (UniqueName: \"kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5\") pod \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") "
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373142 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content\") pod \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\" (UID: \"aef12fa5-5d60-4ada-bb60-01a96ed61c48\") "
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373451 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxqcq\" (UniqueName: \"kubernetes.io/projected/764276f9-3bdf-4936-a57f-dc98650de4b7-kube-api-access-fxqcq\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373493 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-scripts\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373524 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-log-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373553 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373573 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373640 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-config-data\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373663 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.373702 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-run-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0"
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.374396 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities" (OuterVolumeSpecName: "utilities") pod "aef12fa5-5d60-4ada-bb60-01a96ed61c48" (UID: "aef12fa5-5d60-4ada-bb60-01a96ed61c48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.380811 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5" (OuterVolumeSpecName: "kube-api-access-9r8p5") pod "aef12fa5-5d60-4ada-bb60-01a96ed61c48" (UID: "aef12fa5-5d60-4ada-bb60-01a96ed61c48"). InnerVolumeSpecName "kube-api-access-9r8p5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.456496 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aef12fa5-5d60-4ada-bb60-01a96ed61c48" (UID: "aef12fa5-5d60-4ada-bb60-01a96ed61c48"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477522 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477627 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-config-data\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477654 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477705 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-run-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477751 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxqcq\" (UniqueName: \"kubernetes.io/projected/764276f9-3bdf-4936-a57f-dc98650de4b7-kube-api-access-fxqcq\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477790 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-scripts\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477824 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-log-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477857 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.477907 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.478121 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r8p5\" (UniqueName: \"kubernetes.io/projected/aef12fa5-5d60-4ada-bb60-01a96ed61c48-kube-api-access-9r8p5\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.478143 4660 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef12fa5-5d60-4ada-bb60-01a96ed61c48-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.478938 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-run-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.479810 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/764276f9-3bdf-4936-a57f-dc98650de4b7-log-httpd\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.482825 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.483645 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-config-data\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.484371 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.484676 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-scripts\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.484708 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764276f9-3bdf-4936-a57f-dc98650de4b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.496060 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxqcq\" (UniqueName: \"kubernetes.io/projected/764276f9-3bdf-4936-a57f-dc98650de4b7-kube-api-access-fxqcq\") pod \"ceilometer-0\" (UID: \"764276f9-3bdf-4936-a57f-dc98650de4b7\") " pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.642287 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.703403 4660 generic.go:334] "Generic (PLEG): container finished" podID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerID="c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274" exitCode=0 Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.703527 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5rr5" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.718487 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f0935c-a7a6-45af-a5ec-69aeb489ffac" path="/var/lib/kubelet/pods/36f0935c-a7a6-45af-a5ec-69aeb489ffac/volumes" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.719413 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818c3f20-b884-4578-980d-cccc395cbfcb" path="/var/lib/kubelet/pods/818c3f20-b884-4578-980d-cccc395cbfcb/volumes" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.721270 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerDied","Data":"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274"} Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.721306 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5rr5" event={"ID":"aef12fa5-5d60-4ada-bb60-01a96ed61c48","Type":"ContainerDied","Data":"77cd5069b2d556b4eefc7e83f400201214d667d89bb27b91d43102f1b7fa900b"} Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.721342 4660 scope.go:117] "RemoveContainer" containerID="c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274" Nov 29 07:43:37 crc kubenswrapper[4660]: W1129 07:43:37.730704 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod544cff03_d589_4ba8_ac61_e5976fe393d9.slice/crio-9c1677888921261a9cf457d5e57b990418dc5f8fe4075b7503197e8eab09a9df WatchSource:0}: Error finding container 9c1677888921261a9cf457d5e57b990418dc5f8fe4075b7503197e8eab09a9df: Status 404 returned error can't find the container with id 9c1677888921261a9cf457d5e57b990418dc5f8fe4075b7503197e8eab09a9df Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.745782 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.764450 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5rr5"] Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.765272 4660 scope.go:117] "RemoveContainer" containerID="13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.853836 4660 scope.go:117] "RemoveContainer" containerID="4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.858491 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5rr5"] Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.958288 4660 scope.go:117] "RemoveContainer" containerID="c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274" Nov 29 07:43:37 crc kubenswrapper[4660]: E1129 07:43:37.962704 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274\": container with ID starting with c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274 not found: ID does not exist" containerID="c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.962743 4660 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274"} err="failed to get container status \"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274\": rpc error: code = NotFound desc = could not find container \"c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274\": container with ID starting with c1fa5cb31aded7d49fa7ee5d3efac8a9a420822f13528da4112f8660f0963274 not found: ID does not exist" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.962769 4660 scope.go:117] "RemoveContainer" containerID="13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9" Nov 29 07:43:37 crc kubenswrapper[4660]: E1129 07:43:37.963045 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9\": container with ID starting with 13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9 not found: ID does not exist" containerID="13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.963065 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9"} err="failed to get container status \"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9\": rpc error: code = NotFound desc = could not find container \"13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9\": container with ID starting with 13dc0de7138fb657d67c40cd81b8cb9db71dbc31fe60cb8f32a514bdfc8dc8d9 not found: ID does not exist" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.963078 4660 scope.go:117] "RemoveContainer" containerID="4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439" Nov 29 07:43:37 crc kubenswrapper[4660]: E1129 07:43:37.964665 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439\": container with ID starting with 4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439 not found: ID does not exist" containerID="4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439" Nov 29 07:43:37 crc kubenswrapper[4660]: I1129 07:43:37.964693 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439"} err="failed to get container status \"4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439\": rpc error: code = NotFound desc = could not find container \"4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439\": container with ID starting with 4aea171b7fb82c4a1e5c84bcbc0e6ce5b4f08245728882ca586c9576d599f439 not found: ID does not exist" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.208351 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.391331 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.423115 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data\") pod \"32036f0b-e420-4104-9560-38b2516339ba\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.423203 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5g79\" (UniqueName: \"kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79\") pod \"32036f0b-e420-4104-9560-38b2516339ba\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.423276 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle\") pod \"32036f0b-e420-4104-9560-38b2516339ba\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.423347 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs\") pod \"32036f0b-e420-4104-9560-38b2516339ba\" (UID: \"32036f0b-e420-4104-9560-38b2516339ba\") " Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.424159 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs" (OuterVolumeSpecName: "logs") pod "32036f0b-e420-4104-9560-38b2516339ba" (UID: "32036f0b-e420-4104-9560-38b2516339ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.424447 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32036f0b-e420-4104-9560-38b2516339ba-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.429105 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79" (OuterVolumeSpecName: "kube-api-access-k5g79") pod "32036f0b-e420-4104-9560-38b2516339ba" (UID: "32036f0b-e420-4104-9560-38b2516339ba"). InnerVolumeSpecName "kube-api-access-k5g79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.457889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32036f0b-e420-4104-9560-38b2516339ba" (UID: "32036f0b-e420-4104-9560-38b2516339ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.490043 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data" (OuterVolumeSpecName: "config-data") pod "32036f0b-e420-4104-9560-38b2516339ba" (UID: "32036f0b-e420-4104-9560-38b2516339ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.525816 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.525866 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5g79\" (UniqueName: \"kubernetes.io/projected/32036f0b-e420-4104-9560-38b2516339ba-kube-api-access-k5g79\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.525881 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32036f0b-e420-4104-9560-38b2516339ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.742944 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764276f9-3bdf-4936-a57f-dc98650de4b7","Type":"ContainerStarted","Data":"cfab9d953b8cb18532cdccc7cb2a03a249cd14b9107ce4c30518d30c6594a727"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.746229 4660 generic.go:334] "Generic (PLEG): container finished" podID="b1955589-00bc-4c74-9f66-e1a37e5e245d" containerID="665a409c77be3609c61bac2565451379ec8d1987565cf6667cfd10ba748a48f7" exitCode=137 Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.746283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1955589-00bc-4c74-9f66-e1a37e5e245d","Type":"ContainerDied","Data":"665a409c77be3609c61bac2565451379ec8d1987565cf6667cfd10ba748a48f7"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.748019 4660 generic.go:334] "Generic (PLEG): container finished" podID="32036f0b-e420-4104-9560-38b2516339ba" containerID="6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94" exitCode=0 Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.748062 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerDied","Data":"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.748081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32036f0b-e420-4104-9560-38b2516339ba","Type":"ContainerDied","Data":"1363ab53187e317a48c9611056675c5ee42d927b42b527dba2c32a0983789f21"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.748096 4660 scope.go:117] "RemoveContainer" containerID="6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.748149 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.763700 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"544cff03-d589-4ba8-ac61-e5976fe393d9","Type":"ContainerStarted","Data":"c1a71f76e018a96202d52c36079e05f98c2cb3e3b29dfbb36824bcc3fc6797e1"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.763738 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"544cff03-d589-4ba8-ac61-e5976fe393d9","Type":"ContainerStarted","Data":"9c1677888921261a9cf457d5e57b990418dc5f8fe4075b7503197e8eab09a9df"} Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.809948 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.809927921 podStartE2EDuration="2.809927921s" podCreationTimestamp="2025-11-29 07:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:38.777885016 +0000 UTC m=+1709.331414925" watchObservedRunningTime="2025-11-29 07:43:38.809927921 +0000 UTC m=+1709.363457820" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.834784 4660 scope.go:117] "RemoveContainer" containerID="30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.867309 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.870169 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.879786 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.880192 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-log" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880217 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-log" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.880243 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="extract-content" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880251 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="extract-content" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.880265 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="registry-server" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880274 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="registry-server" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.880289 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="extract-utilities" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880296 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="extract-utilities" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.880332 4660 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-api" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880338 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-api" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880514 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-api" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880526 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" containerName="registry-server" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.880538 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="32036f0b-e420-4104-9560-38b2516339ba" containerName="nova-api-log" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.881739 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.883490 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.883795 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.884369 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.886562 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.913974 4660 scope.go:117] "RemoveContainer" containerID="6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.919062 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94\": container with ID starting with 6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94 not found: ID does not exist" containerID="6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.919133 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94"} err="failed to get container status \"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94\": rpc error: code = NotFound desc = could not find container \"6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94\": container with ID starting with 6845068b70cbdf6ed8af5cf2d8e900f96df3730862d729ae5cb06c142aadfb94 not found: ID does not exist" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.919159 4660 scope.go:117] "RemoveContainer" containerID="30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2" Nov 29 07:43:38 crc kubenswrapper[4660]: E1129 07:43:38.921794 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2\": container with ID starting with 30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2 not found: ID does not exist" 
containerID="30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.921817 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2"} err="failed to get container status \"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2\": rpc error: code = NotFound desc = could not find container \"30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2\": container with ID starting with 30565f21aeb0b2f8b77fc84e03ff5034bb9cae5f5bd346d91e21380ed7b721a2 not found: ID does not exist" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938069 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938118 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938300 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhp2\" (UniqueName: \"kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938362 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938437 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.938679 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:38 crc kubenswrapper[4660]: I1129 07:43:38.941446 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040251 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle\") pod \"b1955589-00bc-4c74-9f66-e1a37e5e245d\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040320 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bflbc\" (UniqueName: \"kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc\") pod \"b1955589-00bc-4c74-9f66-e1a37e5e245d\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040377 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data\") pod \"b1955589-00bc-4c74-9f66-e1a37e5e245d\" (UID: \"b1955589-00bc-4c74-9f66-e1a37e5e245d\") " Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040750 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040772 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040902 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhp2\" (UniqueName: \"kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040923 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040975 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.040995 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.042247 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " 
pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.049703 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.050129 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.054020 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc" (OuterVolumeSpecName: "kube-api-access-bflbc") pod "b1955589-00bc-4c74-9f66-e1a37e5e245d" (UID: "b1955589-00bc-4c74-9f66-e1a37e5e245d"). InnerVolumeSpecName "kube-api-access-bflbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.056933 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.063143 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.072296 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snhp2\" (UniqueName: \"kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2\") pod \"nova-api-0\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.102297 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1955589-00bc-4c74-9f66-e1a37e5e245d" (UID: "b1955589-00bc-4c74-9f66-e1a37e5e245d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.113179 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data" (OuterVolumeSpecName: "config-data") pod "b1955589-00bc-4c74-9f66-e1a37e5e245d" (UID: "b1955589-00bc-4c74-9f66-e1a37e5e245d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.142541 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bflbc\" (UniqueName: \"kubernetes.io/projected/b1955589-00bc-4c74-9f66-e1a37e5e245d-kube-api-access-bflbc\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.143048 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.143184 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1955589-00bc-4c74-9f66-e1a37e5e245d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.223479 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.708438 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32036f0b-e420-4104-9560-38b2516339ba" path="/var/lib/kubelet/pods/32036f0b-e420-4104-9560-38b2516339ba/volumes" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.710049 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef12fa5-5d60-4ada-bb60-01a96ed61c48" path="/var/lib/kubelet/pods/aef12fa5-5d60-4ada-bb60-01a96ed61c48/volumes" Nov 29 07:43:39 crc kubenswrapper[4660]: W1129 07:43:39.744385 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3361ac3d_f825_480e_aa86_1de9038e94ad.slice/crio-f60e3d88f696f694cd818e8d69432e4e970a3e5a5f72e4e9e0db918d4efcf6f8 WatchSource:0}: Error finding container f60e3d88f696f694cd818e8d69432e4e970a3e5a5f72e4e9e0db918d4efcf6f8: Status 404 returned error can't find the container with id f60e3d88f696f694cd818e8d69432e4e970a3e5a5f72e4e9e0db918d4efcf6f8 Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.747725 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.776617 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764276f9-3bdf-4936-a57f-dc98650de4b7","Type":"ContainerStarted","Data":"58b6002332623ef9d3c9eaa22703663a83f3de8630ef4345caa73a02e9de5f39"} Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.776655 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764276f9-3bdf-4936-a57f-dc98650de4b7","Type":"ContainerStarted","Data":"bdea3f862c0b6529d8f4a80023dcc3261dac7751d472b8ab331afe4ec3144fb7"} Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.777501 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerStarted","Data":"f60e3d88f696f694cd818e8d69432e4e970a3e5a5f72e4e9e0db918d4efcf6f8"} Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.780972 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1955589-00bc-4c74-9f66-e1a37e5e245d","Type":"ContainerDied","Data":"c8b116beecbc32f60815e7211da39e3727649bed6c2dc4c6f49aef5d43db07dd"} Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.781004 4660 scope.go:117] "RemoveContainer" 
containerID="665a409c77be3609c61bac2565451379ec8d1987565cf6667cfd10ba748a48f7" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.781087 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.817498 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.838033 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.850845 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:39 crc kubenswrapper[4660]: E1129 07:43:39.851249 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1955589-00bc-4c74-9f66-e1a37e5e245d" containerName="nova-scheduler-scheduler" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.851268 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1955589-00bc-4c74-9f66-e1a37e5e245d" containerName="nova-scheduler-scheduler" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.851462 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1955589-00bc-4c74-9f66-e1a37e5e245d" containerName="nova-scheduler-scheduler" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.852099 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.855681 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.858567 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.972717 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.972766 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtvt\" (UniqueName: \"kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:39 crc kubenswrapper[4660]: I1129 07:43:39.972932 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.074272 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.074319 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mtvt\" (UniqueName: 
\"kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.074421 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.083242 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.083283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.091516 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mtvt\" (UniqueName: \"kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt\") pod \"nova-scheduler-0\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.185733 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.748476 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.802763 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ea96248-f5f2-4c01-87db-82db0acdc763","Type":"ContainerStarted","Data":"3baf55be88fc398f635e2bef951adcae360880f6539ead635506ea32c67b2b02"} Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.806876 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerStarted","Data":"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697"} Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.806908 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerStarted","Data":"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc"} Nov 29 07:43:40 crc kubenswrapper[4660]: I1129 07:43:40.827577 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.827544836 podStartE2EDuration="2.827544836s" podCreationTimestamp="2025-11-29 07:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:40.824774625 +0000 UTC m=+1711.378304524" watchObservedRunningTime="2025-11-29 07:43:40.827544836 +0000 UTC m=+1711.381074735" Nov 29 07:43:41 crc kubenswrapper[4660]: I1129 07:43:41.708734 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b1955589-00bc-4c74-9f66-e1a37e5e245d" path="/var/lib/kubelet/pods/b1955589-00bc-4c74-9f66-e1a37e5e245d/volumes" Nov 29 07:43:41 crc kubenswrapper[4660]: I1129 07:43:41.817444 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764276f9-3bdf-4936-a57f-dc98650de4b7","Type":"ContainerStarted","Data":"2aebf64c91d3a736bac4e22d51b648eb040de4f528373918c958d28381cdde8e"} Nov 29 07:43:41 crc kubenswrapper[4660]: I1129 07:43:41.820054 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ea96248-f5f2-4c01-87db-82db0acdc763","Type":"ContainerStarted","Data":"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525"} Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.169594 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.190574 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.198996 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.198976893 podStartE2EDuration="3.198976893s" podCreationTimestamp="2025-11-29 07:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:41.844703916 +0000 UTC m=+1712.398233815" watchObservedRunningTime="2025-11-29 07:43:42.198976893 +0000 UTC m=+1712.752506792" Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.244988 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.245247 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="dnsmasq-dns" containerID="cri-o://e47b38093f45d74b42130e1e15cbe990d777fb72cece1fad7f8cf16880af8ea2" gracePeriod=10 Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.838984 4660 generic.go:334] "Generic (PLEG): container finished" podID="6136ba9b-f915-4d91-b474-229e02f382a2" containerID="e47b38093f45d74b42130e1e15cbe990d777fb72cece1fad7f8cf16880af8ea2" exitCode=0 Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.839062 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" event={"ID":"6136ba9b-f915-4d91-b474-229e02f382a2","Type":"ContainerDied","Data":"e47b38093f45d74b42130e1e15cbe990d777fb72cece1fad7f8cf16880af8ea2"} Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.839328 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" event={"ID":"6136ba9b-f915-4d91-b474-229e02f382a2","Type":"ContainerDied","Data":"57ae512818a07773bd1efe35ac73e5361eed6e10a97d2c4c2c323692e8dcd91f"} Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.839355 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ae512818a07773bd1efe35ac73e5361eed6e10a97d2c4c2c323692e8dcd91f" Nov 29 07:43:42 crc kubenswrapper[4660]: I1129 07:43:42.919450 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056433 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlwc\" (UniqueName: \"kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056489 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056640 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056697 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056745 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.056899 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb\") pod \"6136ba9b-f915-4d91-b474-229e02f382a2\" (UID: \"6136ba9b-f915-4d91-b474-229e02f382a2\") " Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.074771 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc" (OuterVolumeSpecName: "kube-api-access-tvlwc") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "kube-api-access-tvlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.159248 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlwc\" (UniqueName: \"kubernetes.io/projected/6136ba9b-f915-4d91-b474-229e02f382a2-kube-api-access-tvlwc\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.226652 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.234434 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.260925 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.260959 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.335729 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.355121 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.360697 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config" (OuterVolumeSpecName: "config") pod "6136ba9b-f915-4d91-b474-229e02f382a2" (UID: "6136ba9b-f915-4d91-b474-229e02f382a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.362926 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.362950 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.362959 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6136ba9b-f915-4d91-b474-229e02f382a2-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.852322 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2gt82" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.852339 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"764276f9-3bdf-4936-a57f-dc98650de4b7","Type":"ContainerStarted","Data":"54010e68849bd5a96286de60b7babe25bbd091e20468d1e2e203cb7041a705fa"} Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.853104 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.882188 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.174514802 podStartE2EDuration="6.882172046s" podCreationTimestamp="2025-11-29 07:43:37 +0000 UTC" firstStartedPulling="2025-11-29 07:43:38.214339592 +0000 UTC m=+1708.767869491" lastFinishedPulling="2025-11-29 07:43:42.921996836 +0000 UTC m=+1713.475526735" observedRunningTime="2025-11-29 07:43:43.877569189 +0000 UTC m=+1714.431099088" watchObservedRunningTime="2025-11-29 07:43:43.882172046 +0000 UTC m=+1714.435701945" Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.895862 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:43:43 crc kubenswrapper[4660]: I1129 07:43:43.906199 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2gt82"] Nov 29 07:43:45 crc kubenswrapper[4660]: I1129 07:43:45.186676 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:43:45 crc kubenswrapper[4660]: I1129 07:43:45.704189 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" path="/var/lib/kubelet/pods/6136ba9b-f915-4d91-b474-229e02f382a2/volumes" Nov 29 07:43:47 crc kubenswrapper[4660]: I1129 07:43:47.191341 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:43:47 crc kubenswrapper[4660]: I1129 07:43:47.212358 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:43:47 crc kubenswrapper[4660]: I1129 07:43:47.908063 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.498164 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jx947"] Nov 29 07:43:48 crc kubenswrapper[4660]: E1129 07:43:48.498832 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="dnsmasq-dns" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.498846 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="dnsmasq-dns" Nov 29 07:43:48 crc kubenswrapper[4660]: E1129 07:43:48.498869 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="init" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.498875 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="init" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.499074 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6136ba9b-f915-4d91-b474-229e02f382a2" containerName="dnsmasq-dns" Nov 29 07:43:48 
crc kubenswrapper[4660]: I1129 07:43:48.499660 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.502278 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.502642 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.508734 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jx947"] Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.576128 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxlk\" (UniqueName: \"kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.576218 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.576236 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.576303 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.677521 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.677566 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.677632 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.677717 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxlk\" (UniqueName: \"kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.683603 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.683833 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.685698 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.700964 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxlk\" (UniqueName: \"kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk\") pod \"nova-cell1-cell-mapping-jx947\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:48 crc kubenswrapper[4660]: I1129 07:43:48.817679 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.224964 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.226001 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.286209 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jx947"] Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.908482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jx947" event={"ID":"71369e18-7325-4509-9c64-2f59afb7513c","Type":"ContainerStarted","Data":"4455822cba4968313c1e901f313177b4707d2332c20ef84973d89548333aba4d"} Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.908820 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jx947" event={"ID":"71369e18-7325-4509-9c64-2f59afb7513c","Type":"ContainerStarted","Data":"672f522537f7f88324540ee050522f168ac0cf8d2a00b6bde429dce0ab9ddc7f"} Nov 29 07:43:49 crc kubenswrapper[4660]: I1129 07:43:49.929753 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jx947" podStartSLOduration=1.929731586 podStartE2EDuration="1.929731586s" podCreationTimestamp="2025-11-29 07:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:43:49.928424242 +0000 UTC m=+1720.481954201" watchObservedRunningTime="2025-11-29 07:43:49.929731586 +0000 UTC m=+1720.483261485" Nov 29 07:43:50 crc kubenswrapper[4660]: I1129 07:43:50.189870 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:43:50 crc kubenswrapper[4660]: I1129 07:43:50.216347 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 07:43:50 crc kubenswrapper[4660]: I1129 07:43:50.242241 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:50 crc kubenswrapper[4660]: I1129 07:43:50.242976 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:43:50 crc kubenswrapper[4660]: I1129 07:43:50.693942 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:43:50 crc kubenswrapper[4660]: E1129 07:43:50.694146 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:43:50 crc kubenswrapper[4660]: 
I1129 07:43:50.943782 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:43:54 crc kubenswrapper[4660]: I1129 07:43:54.960209 4660 generic.go:334] "Generic (PLEG): container finished" podID="71369e18-7325-4509-9c64-2f59afb7513c" containerID="4455822cba4968313c1e901f313177b4707d2332c20ef84973d89548333aba4d" exitCode=0 Nov 29 07:43:54 crc kubenswrapper[4660]: I1129 07:43:54.960783 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jx947" event={"ID":"71369e18-7325-4509-9c64-2f59afb7513c","Type":"ContainerDied","Data":"4455822cba4968313c1e901f313177b4707d2332c20ef84973d89548333aba4d"} Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.349214 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.442070 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle\") pod \"71369e18-7325-4509-9c64-2f59afb7513c\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.442124 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data\") pod \"71369e18-7325-4509-9c64-2f59afb7513c\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.442184 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts\") pod \"71369e18-7325-4509-9c64-2f59afb7513c\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.442276 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxlk\" (UniqueName: \"kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk\") pod \"71369e18-7325-4509-9c64-2f59afb7513c\" (UID: \"71369e18-7325-4509-9c64-2f59afb7513c\") " Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.449212 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts" (OuterVolumeSpecName: "scripts") pod "71369e18-7325-4509-9c64-2f59afb7513c" (UID: "71369e18-7325-4509-9c64-2f59afb7513c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.450770 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk" (OuterVolumeSpecName: "kube-api-access-pnxlk") pod "71369e18-7325-4509-9c64-2f59afb7513c" (UID: "71369e18-7325-4509-9c64-2f59afb7513c"). InnerVolumeSpecName "kube-api-access-pnxlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.475759 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data" (OuterVolumeSpecName: "config-data") pod "71369e18-7325-4509-9c64-2f59afb7513c" (UID: "71369e18-7325-4509-9c64-2f59afb7513c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.494458 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71369e18-7325-4509-9c64-2f59afb7513c" (UID: "71369e18-7325-4509-9c64-2f59afb7513c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.544652 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxlk\" (UniqueName: \"kubernetes.io/projected/71369e18-7325-4509-9c64-2f59afb7513c-kube-api-access-pnxlk\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.544703 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.544713 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.544722 4660 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71369e18-7325-4509-9c64-2f59afb7513c-scripts\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.982986 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jx947" event={"ID":"71369e18-7325-4509-9c64-2f59afb7513c","Type":"ContainerDied","Data":"672f522537f7f88324540ee050522f168ac0cf8d2a00b6bde429dce0ab9ddc7f"} Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.983052 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="672f522537f7f88324540ee050522f168ac0cf8d2a00b6bde429dce0ab9ddc7f" Nov 29 07:43:56 crc kubenswrapper[4660]: I1129 07:43:56.983068 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jx947" Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.166166 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.166481 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-log" containerID="cri-o://b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc" gracePeriod=30 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.166536 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-api" containerID="cri-o://5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697" gracePeriod=30 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.178721 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.178919 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3ea96248-f5f2-4c01-87db-82db0acdc763" containerName="nova-scheduler-scheduler" containerID="cri-o://98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525" gracePeriod=30 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.229661 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.230276 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" containerID="cri-o://71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1" gracePeriod=30 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.230322 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" containerID="cri-o://2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4" gracePeriod=30 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.997690 4660 generic.go:334] "Generic (PLEG): container finished" podID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerID="71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1" exitCode=143 Nov 29 07:43:57 crc kubenswrapper[4660]: I1129 07:43:57.997806 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerDied","Data":"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1"} Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.000083 4660 generic.go:334] "Generic (PLEG): container finished" podID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerID="b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc" exitCode=143 Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.000136 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerDied","Data":"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc"} Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.389818 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.477475 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mtvt\" (UniqueName: \"kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt\") pod \"3ea96248-f5f2-4c01-87db-82db0acdc763\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.477563 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data\") pod \"3ea96248-f5f2-4c01-87db-82db0acdc763\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.477645 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle\") pod \"3ea96248-f5f2-4c01-87db-82db0acdc763\" (UID: \"3ea96248-f5f2-4c01-87db-82db0acdc763\") " Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.487484 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt" (OuterVolumeSpecName: "kube-api-access-4mtvt") pod "3ea96248-f5f2-4c01-87db-82db0acdc763" (UID: "3ea96248-f5f2-4c01-87db-82db0acdc763"). InnerVolumeSpecName "kube-api-access-4mtvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.510734 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ea96248-f5f2-4c01-87db-82db0acdc763" (UID: "3ea96248-f5f2-4c01-87db-82db0acdc763"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.512396 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data" (OuterVolumeSpecName: "config-data") pod "3ea96248-f5f2-4c01-87db-82db0acdc763" (UID: "3ea96248-f5f2-4c01-87db-82db0acdc763"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.579894 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.579927 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mtvt\" (UniqueName: \"kubernetes.io/projected/3ea96248-f5f2-4c01-87db-82db0acdc763-kube-api-access-4mtvt\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:58 crc kubenswrapper[4660]: I1129 07:43:58.579939 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea96248-f5f2-4c01-87db-82db0acdc763-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.008921 4660 generic.go:334] "Generic (PLEG): container finished" podID="3ea96248-f5f2-4c01-87db-82db0acdc763" containerID="98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525" exitCode=0 Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.008971 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.008983 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ea96248-f5f2-4c01-87db-82db0acdc763","Type":"ContainerDied","Data":"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525"} Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.009043 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ea96248-f5f2-4c01-87db-82db0acdc763","Type":"ContainerDied","Data":"3baf55be88fc398f635e2bef951adcae360880f6539ead635506ea32c67b2b02"} Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.009063 4660 scope.go:117] "RemoveContainer" containerID="98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.031875 4660 scope.go:117] "RemoveContainer" containerID="98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525" Nov 29 07:43:59 crc kubenswrapper[4660]: E1129 07:43:59.032307 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525\": container with ID starting with 98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525 not found: ID does not exist" containerID="98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.032345 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525"} err="failed to get container status \"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525\": rpc error: code = NotFound desc = could not find container \"98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525\": container with ID starting with 98dfb4191ebb54e4376fac5b3378750890e219311eb6b1c8e3ac4bf580dc3525 not found: ID does not exist" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.041086 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.052496 4660 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.064417 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:59 crc kubenswrapper[4660]: E1129 07:43:59.065061 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea96248-f5f2-4c01-87db-82db0acdc763" containerName="nova-scheduler-scheduler" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.065085 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea96248-f5f2-4c01-87db-82db0acdc763" containerName="nova-scheduler-scheduler" Nov 29 07:43:59 crc kubenswrapper[4660]: E1129 07:43:59.065106 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71369e18-7325-4509-9c64-2f59afb7513c" containerName="nova-manage" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.065114 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="71369e18-7325-4509-9c64-2f59afb7513c" containerName="nova-manage" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.065372 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea96248-f5f2-4c01-87db-82db0acdc763" containerName="nova-scheduler-scheduler" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.065409 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="71369e18-7325-4509-9c64-2f59afb7513c" containerName="nova-manage" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.066388 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.083532 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.086903 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.090599 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8hxn\" (UniqueName: \"kubernetes.io/projected/f0b8bc00-d486-430f-ad6d-483e3372519b-kube-api-access-d8hxn\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.090761 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-config-data\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.091165 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.193337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8hxn\" (UniqueName: \"kubernetes.io/projected/f0b8bc00-d486-430f-ad6d-483e3372519b-kube-api-access-d8hxn\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.193440 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-config-data\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.193514 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.198906 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.202486 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b8bc00-d486-430f-ad6d-483e3372519b-config-data\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.209450 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8hxn\" (UniqueName: \"kubernetes.io/projected/f0b8bc00-d486-430f-ad6d-483e3372519b-kube-api-access-d8hxn\") pod \"nova-scheduler-0\" (UID: \"f0b8bc00-d486-430f-ad6d-483e3372519b\") " pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.390291 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.705174 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea96248-f5f2-4c01-87db-82db0acdc763" path="/var/lib/kubelet/pods/3ea96248-f5f2-4c01-87db-82db0acdc763/volumes" Nov 29 07:43:59 crc kubenswrapper[4660]: I1129 07:43:59.841426 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 29 07:43:59 crc kubenswrapper[4660]: W1129 07:43:59.848902 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b8bc00_d486_430f_ad6d_483e3372519b.slice/crio-a579155c8ec9df7b3e13ae4c3112baa6c7b8571a8666c6964b369c80a465b37c WatchSource:0}: Error finding container a579155c8ec9df7b3e13ae4c3112baa6c7b8571a8666c6964b369c80a465b37c: Status 404 returned error can't find the container with id a579155c8ec9df7b3e13ae4c3112baa6c7b8571a8666c6964b369c80a465b37c Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.029900 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0b8bc00-d486-430f-ad6d-483e3372519b","Type":"ContainerStarted","Data":"a579155c8ec9df7b3e13ae4c3112baa6c7b8571a8666c6964b369c80a465b37c"} Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.364751 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:43906->10.217.0.196:8775: read: connection reset by peer" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.364751 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:43892->10.217.0.196:8775: read: connection reset by peer" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.764986 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.820741 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.820980 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.821126 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.821494 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snhp2\" (UniqueName: \"kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.821666 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.821856 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs\") pod \"3361ac3d-f825-480e-aa86-1de9038e94ad\" (UID: \"3361ac3d-f825-480e-aa86-1de9038e94ad\") " Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.822675 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs" (OuterVolumeSpecName: "logs") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.826436 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2" (OuterVolumeSpecName: "kube-api-access-snhp2") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "kube-api-access-snhp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.851560 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data" (OuterVolumeSpecName: "config-data") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.855818 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.882403 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.894756 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3361ac3d-f825-480e-aa86-1de9038e94ad" (UID: "3361ac3d-f825-480e-aa86-1de9038e94ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.923805 4660 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.924091 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3361ac3d-f825-480e-aa86-1de9038e94ad-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.924176 4660 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.924435 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.924549 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3361ac3d-f825-480e-aa86-1de9038e94ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.924642 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snhp2\" (UniqueName: \"kubernetes.io/projected/3361ac3d-f825-480e-aa86-1de9038e94ad-kube-api-access-snhp2\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:00 crc kubenswrapper[4660]: I1129 07:44:00.925792 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.025709 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs\") pod \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.025766 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle\") pod \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.025810 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data\") pod \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.025831 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqv6t\" (UniqueName: \"kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t\") pod \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.025886 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs\") pod \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\" (UID: \"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c\") " Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.027456 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs" (OuterVolumeSpecName: "logs") pod "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" (UID: "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.031289 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t" (OuterVolumeSpecName: "kube-api-access-dqv6t") pod "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" (UID: "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c"). InnerVolumeSpecName "kube-api-access-dqv6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.043173 4660 generic.go:334] "Generic (PLEG): container finished" podID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerID="2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4" exitCode=0 Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.043274 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerDied","Data":"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4"} Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.043306 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54a8b6d-6b2b-4fd7-918b-8443c8133a4c","Type":"ContainerDied","Data":"70ae4c1aaf031f7cb84aa05dbfdc9eb1b3392abbbb4aad4d34a09138dbf395f7"} Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.043326 4660 scope.go:117] "RemoveContainer" containerID="2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.043446 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.048160 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0b8bc00-d486-430f-ad6d-483e3372519b","Type":"ContainerStarted","Data":"f728735a99382eaec42fb8b1cf3b744376a09cf474a4017169b70ec9ee7af8e0"} Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.054555 4660 generic.go:334] "Generic (PLEG): container finished" podID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerID="5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697" exitCode=0 Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.054672 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerDied","Data":"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697"} Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.054747 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3361ac3d-f825-480e-aa86-1de9038e94ad","Type":"ContainerDied","Data":"f60e3d88f696f694cd818e8d69432e4e970a3e5a5f72e4e9e0db918d4efcf6f8"} Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.054959 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.061131 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data" (OuterVolumeSpecName: "config-data") pod "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" (UID: "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.081287 4660 scope.go:117] "RemoveContainer" containerID="71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.109688 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" (UID: "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.119814 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" (UID: "f54a8b6d-6b2b-4fd7-918b-8443c8133a4c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.132117 4660 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-logs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.132414 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.132509 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.132592 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqv6t\" (UniqueName: \"kubernetes.io/projected/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-kube-api-access-dqv6t\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.133773 4660 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.140337 4660 scope.go:117] "RemoveContainer" containerID="2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.141495 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4\": container with ID starting with 2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4 not found: ID does not exist" containerID="2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.141538 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4"} err="failed to get container status \"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4\": rpc error: code = NotFound desc = could not find container \"2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4\": container with ID starting with 2590cb31165e0f49dba9fca3fae5bbea0a51b2e8d0e85eee8f7c9639e33861b4 not found: ID does not exist" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.141561 4660 scope.go:117] "RemoveContainer" containerID="71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.142994 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1\": container with ID starting with 71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1 not found: ID does not exist" containerID="71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.143042 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1"} err="failed to get container status \"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1\": rpc error: code = NotFound desc = could not find container \"71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1\": container with ID starting with 71e02b83fb77f1da15918e53aa52bdd6b22cf565b8666e1b1f8f87a159b273f1 not found: ID does not exist" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.143071 4660 scope.go:117] "RemoveContainer" containerID="5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.152936 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.15291852 podStartE2EDuration="2.15291852s" podCreationTimestamp="2025-11-29 07:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:01.081945573 +0000 UTC m=+1731.635475472" watchObservedRunningTime="2025-11-29 07:44:01.15291852 +0000 UTC m=+1731.706448419" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.158998 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.187816 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.211186 4660 scope.go:117] "RemoveContainer" containerID="b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239158 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.239510 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-log" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239528 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-log" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.239547 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239554 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.239564 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239569 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.239580 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-api" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239585 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-api" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239769 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-log" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239788 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-metadata" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239797 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" containerName="nova-api-api" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.239813 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" containerName="nova-metadata-log" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.240952 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.244540 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.245957 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.246007 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.246167 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.264121 4660 scope.go:117] "RemoveContainer" containerID="5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.265076 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697\": container with ID starting with 5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697 not found: ID does not exist" containerID="5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.265124 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697"} err="failed to get container status \"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697\": rpc error: code = NotFound desc = could not find container \"5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697\": container with ID starting with 5c1ca5f73d056a81bab7603535afe1c7d07da79fa4dc10344142c598b000f697 not found: ID does not exist" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.265154 4660 scope.go:117] "RemoveContainer" containerID="b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc" Nov 29 07:44:01 crc kubenswrapper[4660]: E1129 07:44:01.265467 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc\": container with ID starting with b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc not found: ID does not exist" containerID="b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.265498 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc"} err="failed to get container status \"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc\": rpc error: code = NotFound desc = could not find container \"b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc\": container with ID starting with b7374f98ce6cb8d9a6d698d70618623fc881d8f10a94caa6c3f971bc459a04dc not found: ID does not exist" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.381399 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.392062 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.402969 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.404844 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.408054 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.409788 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.419703 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.437842 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xhc\" (UniqueName: \"kubernetes.io/projected/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-kube-api-access-67xhc\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.437951 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-config-data\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.437994 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.438026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.438096 
4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-logs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.438176 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.540409 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.540484 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.540943 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fbec32-e360-48a4-802f-acafba9315fc-logs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541010 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-logs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541051 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-config-data\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541098 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541191 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541230 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") 
" pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541270 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xhc\" (UniqueName: \"kubernetes.io/projected/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-kube-api-access-67xhc\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.541488 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-logs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.542024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcvc\" (UniqueName: \"kubernetes.io/projected/e8fbec32-e360-48a4-802f-acafba9315fc-kube-api-access-5jcvc\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.542143 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-config-data\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.544236 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.544381 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.545869 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.546711 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-config-data\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.562416 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xhc\" (UniqueName: \"kubernetes.io/projected/2bdf1a62-5e19-4a99-9950-3208cdb8cd0b-kube-api-access-67xhc\") pod \"nova-api-0\" (UID: \"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b\") " pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.573526 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.643109 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-config-data\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.643366 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.643472 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.643588 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcvc\" (UniqueName: \"kubernetes.io/projected/e8fbec32-e360-48a4-802f-acafba9315fc-kube-api-access-5jcvc\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.643732 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fbec32-e360-48a4-802f-acafba9315fc-logs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.644098 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fbec32-e360-48a4-802f-acafba9315fc-logs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.647481 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-config-data\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.651315 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.666113 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fbec32-e360-48a4-802f-acafba9315fc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:01 crc kubenswrapper[4660]: I1129 07:44:01.676157 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcvc\" (UniqueName: \"kubernetes.io/projected/e8fbec32-e360-48a4-802f-acafba9315fc-kube-api-access-5jcvc\") pod 
\"nova-metadata-0\" (UID: \"e8fbec32-e360-48a4-802f-acafba9315fc\") " pod="openstack/nova-metadata-0" Nov 29 07:44:02 crc kubenswrapper[4660]: I1129 07:44:01.713528 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3361ac3d-f825-480e-aa86-1de9038e94ad" path="/var/lib/kubelet/pods/3361ac3d-f825-480e-aa86-1de9038e94ad/volumes" Nov 29 07:44:02 crc kubenswrapper[4660]: I1129 07:44:01.714677 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54a8b6d-6b2b-4fd7-918b-8443c8133a4c" path="/var/lib/kubelet/pods/f54a8b6d-6b2b-4fd7-918b-8443c8133a4c/volumes" Nov 29 07:44:02 crc kubenswrapper[4660]: I1129 07:44:01.723832 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 29 07:44:02 crc kubenswrapper[4660]: W1129 07:44:02.747717 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdf1a62_5e19_4a99_9950_3208cdb8cd0b.slice/crio-f49b330a247b1b8f1dce4d6aa50acf66df326d94344bb8dc69d0b4d4cdbefde5 WatchSource:0}: Error finding container f49b330a247b1b8f1dce4d6aa50acf66df326d94344bb8dc69d0b4d4cdbefde5: Status 404 returned error can't find the container with id f49b330a247b1b8f1dce4d6aa50acf66df326d94344bb8dc69d0b4d4cdbefde5 Nov 29 07:44:02 crc kubenswrapper[4660]: I1129 07:44:02.762448 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 29 07:44:02 crc kubenswrapper[4660]: I1129 07:44:02.842765 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 29 07:44:02 crc kubenswrapper[4660]: W1129 07:44:02.842962 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8fbec32_e360_48a4_802f_acafba9315fc.slice/crio-2046950def4c2d08a2ef470f227c326c533ee02640a7d065a5a091cd6a3165ae WatchSource:0}: Error finding container 2046950def4c2d08a2ef470f227c326c533ee02640a7d065a5a091cd6a3165ae: Status 404 returned error can't find the container with id 2046950def4c2d08a2ef470f227c326c533ee02640a7d065a5a091cd6a3165ae Nov 29 07:44:03 crc kubenswrapper[4660]: I1129 07:44:03.090790 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b","Type":"ContainerStarted","Data":"9be81035b072b82d5bb972112453ccfa6b29c73732cf8f9440906ef1c9f55e2e"} Nov 29 07:44:03 crc kubenswrapper[4660]: I1129 07:44:03.090831 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b","Type":"ContainerStarted","Data":"f49b330a247b1b8f1dce4d6aa50acf66df326d94344bb8dc69d0b4d4cdbefde5"} Nov 29 07:44:03 crc kubenswrapper[4660]: I1129 07:44:03.092262 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fbec32-e360-48a4-802f-acafba9315fc","Type":"ContainerStarted","Data":"c2035ff7ddfb9c037365109f5f5e1a392d5efc74d5dda40256de3f9e737640e7"} Nov 29 07:44:03 crc kubenswrapper[4660]: I1129 07:44:03.092354 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fbec32-e360-48a4-802f-acafba9315fc","Type":"ContainerStarted","Data":"2046950def4c2d08a2ef470f227c326c533ee02640a7d065a5a091cd6a3165ae"} Nov 29 07:44:03 crc kubenswrapper[4660]: I1129 07:44:03.815491 4660 scope.go:117] "RemoveContainer" containerID="3732f70fa3de06ae3fab7d1e6ecf3188a3303e1e9315d4418e2d0043eeb22b5b" Nov 29 07:44:03 crc 
kubenswrapper[4660]: I1129 07:44:03.848552 4660 scope.go:117] "RemoveContainer" containerID="59ccd31e20601580f9b94f1b845b510b131262eaa48534a26444589237b64395" Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.110818 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2bdf1a62-5e19-4a99-9950-3208cdb8cd0b","Type":"ContainerStarted","Data":"54707946a9aa7684000f11ca6bcbd9468d5c266ffd8f1efe8b4d4f8c2f17df96"} Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.114521 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fbec32-e360-48a4-802f-acafba9315fc","Type":"ContainerStarted","Data":"536e3fb52ff6693ccf8c965450e660e101689410d657ead5e321f0f161c49866"} Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.139155 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.139127719 podStartE2EDuration="3.139127719s" podCreationTimestamp="2025-11-29 07:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:04.137119497 +0000 UTC m=+1734.690649396" watchObservedRunningTime="2025-11-29 07:44:04.139127719 +0000 UTC m=+1734.692657628" Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.166951 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.166912146 podStartE2EDuration="3.166912146s" podCreationTimestamp="2025-11-29 07:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:04.161186361 +0000 UTC m=+1734.714716270" watchObservedRunningTime="2025-11-29 07:44:04.166912146 +0000 UTC m=+1734.720442055" Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.391668 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 29 07:44:04 crc kubenswrapper[4660]: I1129 07:44:04.694132 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:44:04 crc kubenswrapper[4660]: E1129 07:44:04.694431 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:44:06 crc kubenswrapper[4660]: I1129 07:44:06.724683 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:44:06 crc kubenswrapper[4660]: I1129 07:44:06.724942 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 29 07:44:07 crc kubenswrapper[4660]: I1129 07:44:07.649381 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 29 07:44:09 crc kubenswrapper[4660]: I1129 07:44:09.390878 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 29 07:44:09 crc kubenswrapper[4660]: I1129 07:44:09.420842 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 29 
07:44:10 crc kubenswrapper[4660]: I1129 07:44:10.204652 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 29 07:44:11 crc kubenswrapper[4660]: I1129 07:44:11.574503 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:44:11 crc kubenswrapper[4660]: I1129 07:44:11.574861 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 29 07:44:11 crc kubenswrapper[4660]: I1129 07:44:11.725705 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:44:11 crc kubenswrapper[4660]: I1129 07:44:11.725748 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 29 07:44:12 crc kubenswrapper[4660]: I1129 07:44:12.587901 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2bdf1a62-5e19-4a99-9950-3208cdb8cd0b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 07:44:12 crc kubenswrapper[4660]: I1129 07:44:12.588242 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2bdf1a62-5e19-4a99-9950-3208cdb8cd0b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:44:12 crc kubenswrapper[4660]: I1129 07:44:12.737868 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8fbec32-e360-48a4-802f-acafba9315fc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:44:12 crc kubenswrapper[4660]: I1129 07:44:12.737870 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8fbec32-e360-48a4-802f-acafba9315fc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 29 07:44:16 crc kubenswrapper[4660]: I1129 07:44:16.693936 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:44:16 crc kubenswrapper[4660]: E1129 07:44:16.695380 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.581920 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.582495 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.582793 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.582819 4660 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.589351 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.589458 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.841827 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.963603 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 29 07:44:21 crc kubenswrapper[4660]: I1129 07:44:21.963694 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:44:22 crc kubenswrapper[4660]: I1129 07:44:22.299091 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 29 07:44:31 crc kubenswrapper[4660]: I1129 07:44:31.193352 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:31 crc kubenswrapper[4660]: I1129 07:44:31.693599 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:44:31 crc kubenswrapper[4660]: E1129 07:44:31.693867 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:44:32 crc kubenswrapper[4660]: I1129 07:44:32.075824 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:36 crc kubenswrapper[4660]: I1129 07:44:36.625328 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" containerID="cri-o://354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b" gracePeriod=604795 Nov 29 07:44:36 crc kubenswrapper[4660]: I1129 07:44:36.791123 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" containerID="cri-o://542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6" gracePeriod=604796 Nov 29 07:44:39 crc kubenswrapper[4660]: I1129 07:44:39.146029 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 29 07:44:39 crc kubenswrapper[4660]: I1129 07:44:39.806773 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.335094 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.345158 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433540 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433666 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433692 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffz4b\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433719 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433789 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433832 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433862 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433892 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433918 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433947 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.433983 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434021 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434070 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434108 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434157 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434189 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434219 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434240 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434292 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc99\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434332 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434366 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info\") pod \"0a408d44-6909-4748-9b8e-72da66b0afea\" (UID: \"0a408d44-6909-4748-9b8e-72da66b0afea\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.434392 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd\") pod \"0604115a-3f3a-4061-bb63-ada6ebb5d458\" (UID: \"0604115a-3f3a-4061-bb63-ada6ebb5d458\") " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.437963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.443414 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.452683 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.459752 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.462510 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.463730 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b" (OuterVolumeSpecName: "kube-api-access-ffz4b") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). 
InnerVolumeSpecName "kube-api-access-ffz4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.464360 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.464742 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.473783 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99" (OuterVolumeSpecName: "kube-api-access-rhc99") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "kube-api-access-rhc99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.473852 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.474252 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.484384 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.496501 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.500641 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). 
InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.501014 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info" (OuterVolumeSpecName: "pod-info") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.502966 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info" (OuterVolumeSpecName: "pod-info") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.529208 4660 generic.go:334] "Generic (PLEG): container finished" podID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerID="542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6" exitCode=0 Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.529290 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerDied","Data":"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6"} Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.529338 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0604115a-3f3a-4061-bb63-ada6ebb5d458","Type":"ContainerDied","Data":"7a0482640c44c50ba843984db17eefcdab81d00dd75b0a5cac65e01a6a1bc8a4"} Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.529360 4660 scope.go:117] "RemoveContainer" containerID="542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.529395 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.535294 4660 generic.go:334] "Generic (PLEG): container finished" podID="0a408d44-6909-4748-9b8e-72da66b0afea" containerID="354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b" exitCode=0 Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.535335 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerDied","Data":"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b"} Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.535362 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a408d44-6909-4748-9b8e-72da66b0afea","Type":"ContainerDied","Data":"aec1ffb48dea3abbf284d0dd849ee8b69ea922c7b4d83837abc2e8b63e6bd3b7"} Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536333 4660 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0604115a-3f3a-4061-bb63-ada6ebb5d458-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536362 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536374 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536386 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536397 4660 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0604115a-3f3a-4061-bb63-ada6ebb5d458-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536407 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhc99\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-kube-api-access-rhc99\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536416 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536427 4660 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a408d44-6909-4748-9b8e-72da66b0afea-pod-info\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536437 4660 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536447 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffz4b\" (UniqueName: 
\"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-kube-api-access-ffz4b\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536459 4660 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a408d44-6909-4748-9b8e-72da66b0afea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536488 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536499 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536514 4660 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536526 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536540 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.536740 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.561857 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf" (OuterVolumeSpecName: "server-conf") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.562300 4660 scope.go:117] "RemoveContainer" containerID="77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.584743 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data" (OuterVolumeSpecName: "config-data") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.605561 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data" (OuterVolumeSpecName: "config-data") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.605950 4660 scope.go:117] "RemoveContainer" containerID="542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.607198 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6\": container with ID starting with 542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6 not found: ID does not exist" containerID="542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.607237 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6"} err="failed to get container status \"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6\": rpc error: code = NotFound desc = could not find container \"542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6\": container with ID starting with 542463bdc6a7f3489823a44cd5404a92c2d9e03f0a6dd675858393bb64d874c6 not found: ID does not exist" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.607279 4660 scope.go:117] "RemoveContainer" containerID="77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.607587 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe\": container with ID starting with 77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe not found: ID does not exist" containerID="77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.607632 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe"} err="failed to get container status \"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe\": rpc error: code = NotFound desc = could not find container \"77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe\": container with ID starting with 77fae1ee7b1fbc7f4fe02d4ed91a38e3ebc741c8cc91d8c30e50b1046283bebe not found: ID does not exist" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.607646 4660 scope.go:117] "RemoveContainer" containerID="354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.609276 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.623167 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.645665 4660 scope.go:117] "RemoveContainer" containerID="3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.647749 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf" (OuterVolumeSpecName: "server-conf") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648164 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648242 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648310 4660 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648368 4660 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0604115a-3f3a-4061-bb63-ada6ebb5d458-server-conf\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648416 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a408d44-6909-4748-9b8e-72da66b0afea-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.648485 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.693591 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0a408d44-6909-4748-9b8e-72da66b0afea" (UID: "0a408d44-6909-4748-9b8e-72da66b0afea"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.693661 4660 scope.go:117] "RemoveContainer" containerID="354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.694966 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b\": container with ID starting with 354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b not found: ID does not exist" containerID="354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.694997 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b"} err="failed to get container status \"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b\": rpc error: code = NotFound desc = could not find container \"354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b\": container with ID starting with 354aa174d23217ec83cd2186206e3589c521641b6d427e0d487518edb2b0e69b not found: ID does not exist" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.695021 4660 scope.go:117] "RemoveContainer" containerID="3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.700016 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7\": container with ID starting with 3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7 not found: ID does not exist" containerID="3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.700073 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7"} err="failed to get container status \"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7\": rpc error: code = NotFound desc = could not find container \"3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7\": container with ID starting with 3ba0b7412dda797dba247bed6e75f27ecebb66d0e78ed11548d547932f85a9d7 not found: ID does not exist" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.750100 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a408d44-6909-4748-9b8e-72da66b0afea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.774488 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0604115a-3f3a-4061-bb63-ada6ebb5d458" (UID: "0604115a-3f3a-4061-bb63-ada6ebb5d458"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.852701 4660 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0604115a-3f3a-4061-bb63-ada6ebb5d458-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.873149 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.887992 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.906445 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.906901 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="setup-container" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.906926 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="setup-container" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.906948 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="setup-container" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.906957 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="setup-container" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.906986 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.906993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: E1129 07:44:43.907010 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.907018 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.907227 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.907284 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" containerName="rabbitmq" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.908989 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913176 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913418 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913443 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913691 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913861 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.913982 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fczw9" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.914122 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.949683 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.950120 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:43 crc kubenswrapper[4660]: I1129 07:44:43.988016 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.038702 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.042040 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.045802 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.045946 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.046051 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.046144 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.046235 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.047072 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.047993 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g9twm" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.048104 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.057985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058023 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058046 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbmh\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-kube-api-access-5kbmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058065 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058102 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/147cd78f-2d01-48d5-b43b-eda3532cf537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058145 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/147cd78f-2d01-48d5-b43b-eda3532cf537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058161 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058202 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058219 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058286 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.058318 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.159764 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kbmh\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-kube-api-access-5kbmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.159994 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160155 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/147cd78f-2d01-48d5-b43b-eda3532cf537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160248 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/147cd78f-2d01-48d5-b43b-eda3532cf537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160340 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160447 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160539 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160648 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160765 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.160916 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161037 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161193 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161303 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161391 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161474 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161534 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.161532 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.162747 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.162872 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.162988 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.163192 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnj5c\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-kube-api-access-bnj5c\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.163298 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 
07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.163418 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.163525 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.163831 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/147cd78f-2d01-48d5-b43b-eda3532cf537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.162814 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.164603 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.165508 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.166662 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/147cd78f-2d01-48d5-b43b-eda3532cf537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.167928 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.168372 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.171911 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/147cd78f-2d01-48d5-b43b-eda3532cf537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.178939 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kbmh\" (UniqueName: \"kubernetes.io/projected/147cd78f-2d01-48d5-b43b-eda3532cf537-kube-api-access-5kbmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.212683 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"147cd78f-2d01-48d5-b43b-eda3532cf537\") " pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.235306 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.269781 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.270133 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.270348 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.270807 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.272090 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.272431 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnj5c\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-kube-api-access-bnj5c\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.272889 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.275038 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.276255 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.276349 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.276490 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.273383 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.271115 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.277743 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.271745 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.273315 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.272004 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.280151 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.300892 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.302432 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.314400 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.336772 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnj5c\" (UniqueName: \"kubernetes.io/projected/b51d872c-13ff-4e5a-9c3b-dc644c7c19d6-kube-api-access-bnj5c\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.438899 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6\") " pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.668310 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 29 07:44:44 crc kubenswrapper[4660]: I1129 07:44:44.697526 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:44:44 crc kubenswrapper[4660]: E1129 07:44:44.697802 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.060074 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.202105 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.584142 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"147cd78f-2d01-48d5-b43b-eda3532cf537","Type":"ContainerStarted","Data":"cc3b288b1b2392f7adc11f99512fc628f0997d0b48267c0d4ed2b426bce76703"} Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.585679 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6","Type":"ContainerStarted","Data":"adf09b22dd825db35ac34300d2c6cf4a4b9eb94d7ff013b695717394196ac201"} Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.710906 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0604115a-3f3a-4061-bb63-ada6ebb5d458" path="/var/lib/kubelet/pods/0604115a-3f3a-4061-bb63-ada6ebb5d458/volumes" Nov 29 07:44:45 crc kubenswrapper[4660]: I1129 07:44:45.712189 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a408d44-6909-4748-9b8e-72da66b0afea" path="/var/lib/kubelet/pods/0a408d44-6909-4748-9b8e-72da66b0afea/volumes" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.479394 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.481649 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.483962 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.493954 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.638307 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.638970 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.639085 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lg6\" (UniqueName: \"kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.639185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.639254 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.639342 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.639426 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741688 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: 
\"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741760 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741802 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741842 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741872 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741908 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.741954 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lg6\" (UniqueName: \"kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.743120 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.743214 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.743321 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " 
pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.743510 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.744004 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.744716 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.762646 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lg6\" (UniqueName: \"kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6\") pod \"dnsmasq-dns-d558885bc-xh277\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:46 crc kubenswrapper[4660]: I1129 07:44:46.805491 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:47 crc kubenswrapper[4660]: W1129 07:44:47.292267 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39b9a02f_345a_4f54_817c_8a1956e1fde2.slice/crio-77316728c2b9b6c60fbc6fd0fb7e90028d340e69653ec39f30962664e045480f WatchSource:0}: Error finding container 77316728c2b9b6c60fbc6fd0fb7e90028d340e69653ec39f30962664e045480f: Status 404 returned error can't find the container with id 77316728c2b9b6c60fbc6fd0fb7e90028d340e69653ec39f30962664e045480f Nov 29 07:44:47 crc kubenswrapper[4660]: I1129 07:44:47.301447 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:44:47 crc kubenswrapper[4660]: I1129 07:44:47.605687 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"147cd78f-2d01-48d5-b43b-eda3532cf537","Type":"ContainerStarted","Data":"6179c773f8fb16d5908a26377caaf396aa663d4e305e00e93f699e937e6bb030"} Nov 29 07:44:47 crc kubenswrapper[4660]: I1129 07:44:47.607488 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerStarted","Data":"eb7ed719a195a98b50350922b4ea276f28ba4bddef351c1769c1447f02478496"} Nov 29 07:44:47 crc kubenswrapper[4660]: I1129 07:44:47.607529 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerStarted","Data":"77316728c2b9b6c60fbc6fd0fb7e90028d340e69653ec39f30962664e045480f"} Nov 29 07:44:47 crc kubenswrapper[4660]: I1129 07:44:47.610460 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6","Type":"ContainerStarted","Data":"240bb534d7293cf4da2686ab701d1e298bdcb3c40339d2444931cce17758a141"} Nov 29 07:44:48 crc kubenswrapper[4660]: I1129 07:44:48.620392 4660 generic.go:334] "Generic (PLEG): container finished" podID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerID="eb7ed719a195a98b50350922b4ea276f28ba4bddef351c1769c1447f02478496" exitCode=0 Nov 29 07:44:48 crc kubenswrapper[4660]: I1129 07:44:48.620443 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerDied","Data":"eb7ed719a195a98b50350922b4ea276f28ba4bddef351c1769c1447f02478496"} Nov 29 07:44:49 crc kubenswrapper[4660]: I1129 07:44:49.630104 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerStarted","Data":"ad7487609361abc08f884e20e039cdf4a967b105c9753b9fe7b5dee91806e485"} Nov 29 07:44:49 crc kubenswrapper[4660]: I1129 07:44:49.630438 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:49 crc kubenswrapper[4660]: I1129 07:44:49.651262 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-xh277" podStartSLOduration=3.651243542 podStartE2EDuration="3.651243542s" podCreationTimestamp="2025-11-29 07:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:49.64800541 +0000 UTC m=+1780.201535309" watchObservedRunningTime="2025-11-29 07:44:49.651243542 +0000 UTC m=+1780.204773451" Nov 29 07:44:56 crc kubenswrapper[4660]: I1129 07:44:56.693923 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:44:56 crc kubenswrapper[4660]: E1129 07:44:56.694507 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:44:56 crc kubenswrapper[4660]: I1129 07:44:56.807901 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:44:56 crc kubenswrapper[4660]: I1129 07:44:56.873143 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"] Nov 29 07:44:56 crc kubenswrapper[4660]: I1129 07:44:56.873725 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="dnsmasq-dns" containerID="cri-o://ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41" gracePeriod=10 Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.062417 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-wl6mp"] Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.065313 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.077749 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-wl6mp"] Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.140751 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.140903 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vp9t\" (UniqueName: \"kubernetes.io/projected/f13d98c7-68bf-4e21-936e-115f586f1dff-kube-api-access-4vp9t\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.140932 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.140976 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.141020 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-config\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.141065 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.141090 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243044 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vp9t\" (UniqueName: \"kubernetes.io/projected/f13d98c7-68bf-4e21-936e-115f586f1dff-kube-api-access-4vp9t\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243082 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243115 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243146 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-config\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243183 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243201 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.243252 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.244030 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.244514 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.245002 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.245505 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-config\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.246032 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.246513 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13d98c7-68bf-4e21-936e-115f586f1dff-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.271921 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vp9t\" (UniqueName: \"kubernetes.io/projected/f13d98c7-68bf-4e21-936e-115f586f1dff-kube-api-access-4vp9t\") pod \"dnsmasq-dns-6b6dc74c5-wl6mp\" (UID: \"f13d98c7-68bf-4e21-936e-115f586f1dff\") " pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.404754 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.512977 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549244 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549334 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549450 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549529 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549556 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") " Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.404754 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp"
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.512977 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549244 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549334 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549450 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549529 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549556 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.549632 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb\") pod \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\" (UID: \"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3\") "
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.576447 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl" (OuterVolumeSpecName: "kube-api-access-dkfhl") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "kube-api-access-dkfhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.635803 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.636131 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.652012 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.652038 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkfhl\" (UniqueName: \"kubernetes.io/projected/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-kube-api-access-dkfhl\") on node \"crc\" DevicePath \"\""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.652097 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.656536 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.659862 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.665943 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config" (OuterVolumeSpecName: "config") pod "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" (UID: "b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.718112 4660 generic.go:334] "Generic (PLEG): container finished" podID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerID="ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41" exitCode=0 Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.718153 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" event={"ID":"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3","Type":"ContainerDied","Data":"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41"} Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.718179 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" event={"ID":"b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3","Type":"ContainerDied","Data":"849a7376cf4f5d2a2b9d9868e426882670c86f89b4c601904cee752eae827701"} Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.718194 4660 scope.go:117] "RemoveContainer" containerID="ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.718292 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.742479 4660 scope.go:117] "RemoveContainer" containerID="673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.754508 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.754547 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.754562 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.754874 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"] Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.765384 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-d7c6d"] Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.766113 4660 scope.go:117] "RemoveContainer" containerID="ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41" Nov 29 07:44:57 crc kubenswrapper[4660]: E1129 07:44:57.767836 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41\": container with ID starting with 
ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41 not found: ID does not exist" containerID="ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41"
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.767886 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41"} err="failed to get container status \"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41\": rpc error: code = NotFound desc = could not find container \"ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41\": container with ID starting with ad98af3db9c471c73ca66214ce5c2acb31b839302b128876b00c7e2547d5de41 not found: ID does not exist"
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.767916 4660 scope.go:117] "RemoveContainer" containerID="673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67"
Nov 29 07:44:57 crc kubenswrapper[4660]: E1129 07:44:57.768259 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67\": container with ID starting with 673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67 not found: ID does not exist" containerID="673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67"
Nov 29 07:44:57 crc kubenswrapper[4660]: I1129 07:44:57.768307 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67"} err="failed to get container status \"673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67\": rpc error: code = NotFound desc = could not find container \"673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67\": container with ID starting with 673297c15b5298cacd74b02d26db07b0e77d9202804aa82254d10617df191e67 not found: ID does not exist"
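
The NotFound errors above are a benign race rather than a failure: the dnsmasq-dns-cd5cbd7b9-d7c6d containers had already been removed along with their pod by the time the kubelet asked CRI-O for their status, so there was nothing left to delete. The usual way to code that tolerance, sketched here with hypothetical stand-ins rather than real CRI bindings:

class NotFoundError(Exception):
    """Stand-in for a gRPC NotFound status from the container runtime."""

def remove_container(runtime, container_id: str) -> None:
    # Deleting something already gone counts as success, which is why the
    # kubelet logs the NotFound errors above and moves on without retrying.
    try:
        runtime.remove(container_id)
    except NotFoundError:
        pass
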
event={"ID":"f13d98c7-68bf-4e21-936e-115f586f1dff","Type":"ContainerStarted","Data":"e55e08500d89a206b873a9f89264a0570807663326c8ddf3803c3caf6fbbaf43"} Nov 29 07:44:59 crc kubenswrapper[4660]: I1129 07:44:59.711670 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" path="/var/lib/kubelet/pods/b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3/volumes" Nov 29 07:44:59 crc kubenswrapper[4660]: I1129 07:44:59.744400 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" event={"ID":"f13d98c7-68bf-4e21-936e-115f586f1dff","Type":"ContainerStarted","Data":"0570eac12a4857a9c70bff0a236c65377db0349b2bb6ce3a8036756764f0417a"} Nov 29 07:44:59 crc kubenswrapper[4660]: I1129 07:44:59.744583 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:44:59 crc kubenswrapper[4660]: I1129 07:44:59.772965 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" podStartSLOduration=2.772947639 podStartE2EDuration="2.772947639s" podCreationTimestamp="2025-11-29 07:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:44:59.763039757 +0000 UTC m=+1790.316569676" watchObservedRunningTime="2025-11-29 07:44:59.772947639 +0000 UTC m=+1790.326477528" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.145762 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk"] Nov 29 07:45:00 crc kubenswrapper[4660]: E1129 07:45:00.146161 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="init" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.146176 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="init" Nov 29 07:45:00 crc kubenswrapper[4660]: E1129 07:45:00.146188 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="dnsmasq-dns" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.146194 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="dnsmasq-dns" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.146368 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="dnsmasq-dns" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.147066 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.149905 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.152071 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.230361 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk"] Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.231164 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhrs\" (UniqueName: \"kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.231224 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.231259 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.333487 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.333557 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.333900 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhrs\" (UniqueName: \"kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.337070 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume\") pod 
\"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.340777 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.358952 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdhrs\" (UniqueName: \"kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs\") pod \"collect-profiles-29406705-75gdk\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.468888 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:00 crc kubenswrapper[4660]: I1129 07:45:00.939543 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk"] Nov 29 07:45:01 crc kubenswrapper[4660]: I1129 07:45:01.767578 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" event={"ID":"1a07cb2d-b206-422c-be7d-1e2952fb7a96","Type":"ContainerStarted","Data":"c7490b39df81a89729eb9d73a877a4abee3c03c22e5751b2e41fdb4cffd1dea2"} Nov 29 07:45:01 crc kubenswrapper[4660]: I1129 07:45:01.767970 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" event={"ID":"1a07cb2d-b206-422c-be7d-1e2952fb7a96","Type":"ContainerStarted","Data":"97b71def0edf205e08a54a0dae11664460fe3ce7a4fa34561b77927359b7fe5f"} Nov 29 07:45:01 crc kubenswrapper[4660]: I1129 07:45:01.788748 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" podStartSLOduration=1.788726958 podStartE2EDuration="1.788726958s" podCreationTimestamp="2025-11-29 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:01.78684769 +0000 UTC m=+1792.340377599" watchObservedRunningTime="2025-11-29 07:45:01.788726958 +0000 UTC m=+1792.342256857" Nov 29 07:45:02 crc kubenswrapper[4660]: I1129 07:45:02.167801 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-d7c6d" podUID="b8e107af-f2dc-4882-a0a2-cfb7b1caf4b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: i/o timeout" Nov 29 07:45:02 crc kubenswrapper[4660]: I1129 07:45:02.788647 4660 generic.go:334] "Generic (PLEG): container finished" podID="1a07cb2d-b206-422c-be7d-1e2952fb7a96" containerID="c7490b39df81a89729eb9d73a877a4abee3c03c22e5751b2e41fdb4cffd1dea2" exitCode=0 Nov 29 07:45:02 crc kubenswrapper[4660]: I1129 07:45:02.788695 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" 
event={"ID":"1a07cb2d-b206-422c-be7d-1e2952fb7a96","Type":"ContainerDied","Data":"c7490b39df81a89729eb9d73a877a4abee3c03c22e5751b2e41fdb4cffd1dea2"} Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.106921 4660 scope.go:117] "RemoveContainer" containerID="29c6517a1bbddc90fad8042120d5400394a40f86f0d3783d60b89e95f470b4ff" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.199859 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.326438 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume\") pod \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.326491 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdhrs\" (UniqueName: \"kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs\") pod \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.326585 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume\") pod \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\" (UID: \"1a07cb2d-b206-422c-be7d-1e2952fb7a96\") " Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.327151 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a07cb2d-b206-422c-be7d-1e2952fb7a96" (UID: "1a07cb2d-b206-422c-be7d-1e2952fb7a96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.331465 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a07cb2d-b206-422c-be7d-1e2952fb7a96" (UID: "1a07cb2d-b206-422c-be7d-1e2952fb7a96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.332188 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs" (OuterVolumeSpecName: "kube-api-access-hdhrs") pod "1a07cb2d-b206-422c-be7d-1e2952fb7a96" (UID: "1a07cb2d-b206-422c-be7d-1e2952fb7a96"). InnerVolumeSpecName "kube-api-access-hdhrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.428362 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdhrs\" (UniqueName: \"kubernetes.io/projected/1a07cb2d-b206-422c-be7d-1e2952fb7a96-kube-api-access-hdhrs\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.428390 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a07cb2d-b206-422c-be7d-1e2952fb7a96-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.428400 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a07cb2d-b206-422c-be7d-1e2952fb7a96-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.805270 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" event={"ID":"1a07cb2d-b206-422c-be7d-1e2952fb7a96","Type":"ContainerDied","Data":"97b71def0edf205e08a54a0dae11664460fe3ce7a4fa34561b77927359b7fe5f"} Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.805570 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b71def0edf205e08a54a0dae11664460fe3ce7a4fa34561b77927359b7fe5f" Nov 29 07:45:04 crc kubenswrapper[4660]: I1129 07:45:04.805333 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk" Nov 29 07:45:07 crc kubenswrapper[4660]: I1129 07:45:07.406747 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-wl6mp" Nov 29 07:45:07 crc kubenswrapper[4660]: I1129 07:45:07.465198 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:45:07 crc kubenswrapper[4660]: I1129 07:45:07.465776 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-xh277" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="dnsmasq-dns" containerID="cri-o://ad7487609361abc08f884e20e039cdf4a967b105c9753b9fe7b5dee91806e485" gracePeriod=10 Nov 29 07:45:07 crc kubenswrapper[4660]: I1129 07:45:07.832595 4660 generic.go:334] "Generic (PLEG): container finished" podID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerID="ad7487609361abc08f884e20e039cdf4a967b105c9753b9fe7b5dee91806e485" exitCode=0 Nov 29 07:45:07 crc kubenswrapper[4660]: I1129 07:45:07.832930 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerDied","Data":"ad7487609361abc08f884e20e039cdf4a967b105c9753b9fe7b5dee91806e485"} Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.609824 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.718998 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719071 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719129 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719178 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719208 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719270 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6lg6\" (UniqueName: \"kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.719313 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam\") pod \"39b9a02f-345a-4f54-817c-8a1956e1fde2\" (UID: \"39b9a02f-345a-4f54-817c-8a1956e1fde2\") " Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.725993 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6" (OuterVolumeSpecName: "kube-api-access-v6lg6") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "kube-api-access-v6lg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.769442 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.774909 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.786249 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config" (OuterVolumeSpecName: "config") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.792128 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.793970 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.798103 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39b9a02f-345a-4f54-817c-8a1956e1fde2" (UID: "39b9a02f-345a-4f54-817c-8a1956e1fde2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.822060 4660 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.822109 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-config\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.822123 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.822136 4660 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.823321 4660 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.823348 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6lg6\" (UniqueName: \"kubernetes.io/projected/39b9a02f-345a-4f54-817c-8a1956e1fde2-kube-api-access-v6lg6\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.823359 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/39b9a02f-345a-4f54-817c-8a1956e1fde2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.845031 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-xh277" event={"ID":"39b9a02f-345a-4f54-817c-8a1956e1fde2","Type":"ContainerDied","Data":"77316728c2b9b6c60fbc6fd0fb7e90028d340e69653ec39f30962664e045480f"} Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.845072 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-xh277" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.845088 4660 scope.go:117] "RemoveContainer" containerID="ad7487609361abc08f884e20e039cdf4a967b105c9753b9fe7b5dee91806e485" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.884965 4660 scope.go:117] "RemoveContainer" containerID="eb7ed719a195a98b50350922b4ea276f28ba4bddef351c1769c1447f02478496" Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.891570 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:45:08 crc kubenswrapper[4660]: I1129 07:45:08.902415 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-xh277"] Nov 29 07:45:09 crc kubenswrapper[4660]: I1129 07:45:09.710245 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" path="/var/lib/kubelet/pods/39b9a02f-345a-4f54-817c-8a1956e1fde2/volumes" Nov 29 07:45:10 crc kubenswrapper[4660]: I1129 07:45:10.693598 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:45:10 crc kubenswrapper[4660]: E1129 07:45:10.694513 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:45:18 crc kubenswrapper[4660]: I1129 07:45:18.948421 4660 generic.go:334] "Generic (PLEG): container finished" podID="b51d872c-13ff-4e5a-9c3b-dc644c7c19d6" containerID="240bb534d7293cf4da2686ab701d1e298bdcb3c40339d2444931cce17758a141" exitCode=0 Nov 29 07:45:18 crc kubenswrapper[4660]: I1129 07:45:18.948969 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6","Type":"ContainerDied","Data":"240bb534d7293cf4da2686ab701d1e298bdcb3c40339d2444931cce17758a141"} Nov 29 07:45:19 crc kubenswrapper[4660]: I1129 07:45:19.957953 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b51d872c-13ff-4e5a-9c3b-dc644c7c19d6","Type":"ContainerStarted","Data":"03eb17c2c29ea84358b7f3b744ab7e6764dc3937cb1191d4c6101973f88872a6"} Nov 29 07:45:19 crc kubenswrapper[4660]: I1129 07:45:19.958749 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 29 07:45:19 crc kubenswrapper[4660]: I1129 07:45:19.960577 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"147cd78f-2d01-48d5-b43b-eda3532cf537","Type":"ContainerDied","Data":"6179c773f8fb16d5908a26377caaf396aa663d4e305e00e93f699e937e6bb030"} Nov 29 07:45:19 crc kubenswrapper[4660]: I1129 07:45:19.960628 4660 generic.go:334] "Generic (PLEG): container finished" podID="147cd78f-2d01-48d5-b43b-eda3532cf537" containerID="6179c773f8fb16d5908a26377caaf396aa663d4e305e00e93f699e937e6bb030" exitCode=0 Nov 29 07:45:19 crc kubenswrapper[4660]: I1129 07:45:19.993163 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.993142864 podStartE2EDuration="36.993142864s" podCreationTimestamp="2025-11-29 07:44:43 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:19.987680154 +0000 UTC m=+1810.541210053" watchObservedRunningTime="2025-11-29 07:45:19.993142864 +0000 UTC m=+1810.546672763" Nov 29 07:45:20 crc kubenswrapper[4660]: I1129 07:45:20.984100 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"147cd78f-2d01-48d5-b43b-eda3532cf537","Type":"ContainerStarted","Data":"ac816a6e1d8b10083eee15c91f105f71c744fd706a494cf37f1b9e4b6cc3896c"} Nov 29 07:45:20 crc kubenswrapper[4660]: I1129 07:45:20.984487 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:45:21 crc kubenswrapper[4660]: I1129 07:45:21.012877 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.012854558 podStartE2EDuration="38.012854558s" podCreationTimestamp="2025-11-29 07:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 07:45:21.006008864 +0000 UTC m=+1811.559538763" watchObservedRunningTime="2025-11-29 07:45:21.012854558 +0000 UTC m=+1811.566384457" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.630477 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp"] Nov 29 07:45:25 crc kubenswrapper[4660]: E1129 07:45:25.631421 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="init" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.631438 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="init" Nov 29 07:45:25 crc kubenswrapper[4660]: E1129 07:45:25.631465 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="dnsmasq-dns" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.631471 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="dnsmasq-dns" Nov 29 07:45:25 crc kubenswrapper[4660]: E1129 07:45:25.631482 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a07cb2d-b206-422c-be7d-1e2952fb7a96" containerName="collect-profiles" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.631488 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a07cb2d-b206-422c-be7d-1e2952fb7a96" containerName="collect-profiles" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.631683 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a07cb2d-b206-422c-be7d-1e2952fb7a96" containerName="collect-profiles" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.631704 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b9a02f-345a-4f54-817c-8a1956e1fde2" containerName="dnsmasq-dns" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.632352 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.638103 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.638812 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.642352 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.643790 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.653920 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp"] Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.682942 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.683017 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.683110 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.683144 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dssj\" (UniqueName: \"kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.694181 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:45:25 crc kubenswrapper[4660]: E1129 07:45:25.694484 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 
07:45:25.785476 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.785591 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.785658 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dssj\" (UniqueName: \"kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.785917 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.793225 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.793713 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.797413 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.806527 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dssj\" (UniqueName: \"kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:25 crc kubenswrapper[4660]: I1129 07:45:25.948478 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:45:26 crc kubenswrapper[4660]: I1129 07:45:26.663753 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:45:26 crc kubenswrapper[4660]: I1129 07:45:26.666254 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp"] Nov 29 07:45:27 crc kubenswrapper[4660]: I1129 07:45:27.056981 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" event={"ID":"f8c2c2ad-2cee-414f-a0df-76351f87c6e0","Type":"ContainerStarted","Data":"54ff32ad09edc12f6ad550c16d3fcb8a393bbb0a73a0cde5e64c9ac20f37d2fb"} Nov 29 07:45:34 crc kubenswrapper[4660]: I1129 07:45:34.238880 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="147cd78f-2d01-48d5-b43b-eda3532cf537" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.208:5671: connect: connection refused" Nov 29 07:45:34 crc kubenswrapper[4660]: I1129 07:45:34.670999 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b51d872c-13ff-4e5a-9c3b-dc644c7c19d6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.209:5671: connect: connection refused" Nov 29 07:45:39 crc kubenswrapper[4660]: I1129 07:45:39.699096 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:45:39 crc kubenswrapper[4660]: E1129 07:45:39.699568 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:45:40 crc kubenswrapper[4660]: E1129 07:45:40.148627 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 29 07:45:40 crc kubenswrapper[4660]: E1129 07:45:40.149194 4660 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 29 07:45:40 crc kubenswrapper[4660]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 29 07:45:40 crc kubenswrapper[4660]: - hosts: all Nov 29 07:45:40 crc kubenswrapper[4660]: strategy: linear Nov 29 07:45:40 crc kubenswrapper[4660]: tasks: Nov 29 07:45:40 crc kubenswrapper[4660]: - name: Enable podified-repos Nov 29 07:45:40 crc kubenswrapper[4660]: become: true Nov 29 07:45:40 crc kubenswrapper[4660]: ansible.builtin.shell: | Nov 29 07:45:40 crc kubenswrapper[4660]: set -euxo pipefail Nov 29 07:45:40 crc kubenswrapper[4660]: pushd /var/tmp Nov 29 07:45:40 crc kubenswrapper[4660]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Nov 29 07:45:40 crc 
kubenswrapper[4660]: pushd repo-setup-main Nov 29 07:45:40 crc kubenswrapper[4660]: python3 -m venv ./venv Nov 29 07:45:40 crc kubenswrapper[4660]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 29 07:45:40 crc kubenswrapper[4660]: ./venv/bin/repo-setup current-podified -b antelope Nov 29 07:45:40 crc kubenswrapper[4660]: popd Nov 29 07:45:40 crc kubenswrapper[4660]: rm -rf repo-setup-main Nov 29 07:45:40 crc kubenswrapper[4660]: Nov 29 07:45:40 crc kubenswrapper[4660]: Nov 29 07:45:40 crc kubenswrapper[4660]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 29 07:45:40 crc kubenswrapper[4660]: edpm_override_hosts: openstack-edpm-ipam Nov 29 07:45:40 crc kubenswrapper[4660]: edpm_service_type: repo-setup Nov 29 07:45:40 crc kubenswrapper[4660]: Nov 29 07:45:40 crc kubenswrapper[4660]: Nov 29 07:45:40 crc kubenswrapper[4660]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dssj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp_openstack(f8c2c2ad-2cee-414f-a0df-76351f87c6e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 29 07:45:40 crc kubenswrapper[4660]: > logger="UnhandledError" Nov 29 07:45:40 crc kubenswrapper[4660]: E1129 07:45:40.150885 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" podUID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" Nov 29 07:45:40 crc kubenswrapper[4660]: E1129 07:45:40.176094 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" 
podUID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" Nov 29 07:45:44 crc kubenswrapper[4660]: I1129 07:45:44.237850 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 29 07:45:44 crc kubenswrapper[4660]: I1129 07:45:44.671789 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 29 07:45:53 crc kubenswrapper[4660]: I1129 07:45:53.694407 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:45:53 crc kubenswrapper[4660]: E1129 07:45:53.695327 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:46:01 crc kubenswrapper[4660]: I1129 07:46:01.398027 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" event={"ID":"f8c2c2ad-2cee-414f-a0df-76351f87c6e0","Type":"ContainerStarted","Data":"1489607318e3ec1b25fbe99b3ac0f6b32ee9d5a12692958c735b215e09b9a176"} Nov 29 07:46:01 crc kubenswrapper[4660]: I1129 07:46:01.425131 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" podStartSLOduration=3.031706179 podStartE2EDuration="36.425111443s" podCreationTimestamp="2025-11-29 07:45:25 +0000 UTC" firstStartedPulling="2025-11-29 07:45:26.663424853 +0000 UTC m=+1817.216954752" lastFinishedPulling="2025-11-29 07:46:00.056830117 +0000 UTC m=+1850.610360016" observedRunningTime="2025-11-29 07:46:01.420991399 +0000 UTC m=+1851.974521298" watchObservedRunningTime="2025-11-29 07:46:01.425111443 +0000 UTC m=+1851.978641342" Nov 29 07:46:07 crc kubenswrapper[4660]: I1129 07:46:07.695487 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:46:07 crc kubenswrapper[4660]: E1129 07:46:07.696283 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:46:12 crc kubenswrapper[4660]: I1129 07:46:12.519721 4660 generic.go:334] "Generic (PLEG): container finished" podID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" containerID="1489607318e3ec1b25fbe99b3ac0f6b32ee9d5a12692958c735b215e09b9a176" exitCode=0 Nov 29 07:46:12 crc kubenswrapper[4660]: I1129 07:46:12.519912 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" event={"ID":"f8c2c2ad-2cee-414f-a0df-76351f87c6e0","Type":"ContainerDied","Data":"1489607318e3ec1b25fbe99b3ac0f6b32ee9d5a12692958c735b215e09b9a176"} Nov 29 07:46:13 crc kubenswrapper[4660]: I1129 07:46:13.954851 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.033768 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory\") pod \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.033863 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dssj\" (UniqueName: \"kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj\") pod \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.033920 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle\") pod \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.033960 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key\") pod \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\" (UID: \"f8c2c2ad-2cee-414f-a0df-76351f87c6e0\") " Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.045966 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f8c2c2ad-2cee-414f-a0df-76351f87c6e0" (UID: "f8c2c2ad-2cee-414f-a0df-76351f87c6e0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.054811 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj" (OuterVolumeSpecName: "kube-api-access-5dssj") pod "f8c2c2ad-2cee-414f-a0df-76351f87c6e0" (UID: "f8c2c2ad-2cee-414f-a0df-76351f87c6e0"). InnerVolumeSpecName "kube-api-access-5dssj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.065269 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory" (OuterVolumeSpecName: "inventory") pod "f8c2c2ad-2cee-414f-a0df-76351f87c6e0" (UID: "f8c2c2ad-2cee-414f-a0df-76351f87c6e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.075742 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f8c2c2ad-2cee-414f-a0df-76351f87c6e0" (UID: "f8c2c2ad-2cee-414f-a0df-76351f87c6e0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.140591 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.140668 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dssj\" (UniqueName: \"kubernetes.io/projected/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-kube-api-access-5dssj\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.140685 4660 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.140694 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8c2c2ad-2cee-414f-a0df-76351f87c6e0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.538532 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" event={"ID":"f8c2c2ad-2cee-414f-a0df-76351f87c6e0","Type":"ContainerDied","Data":"54ff32ad09edc12f6ad550c16d3fcb8a393bbb0a73a0cde5e64c9ac20f37d2fb"} Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.538781 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ff32ad09edc12f6ad550c16d3fcb8a393bbb0a73a0cde5e64c9ac20f37d2fb" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.538678 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.617977 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm"] Nov 29 07:46:14 crc kubenswrapper[4660]: E1129 07:46:14.618439 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.618462 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.618700 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c2c2ad-2cee-414f-a0df-76351f87c6e0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.619491 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.622310 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.622963 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.623142 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.623728 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.632835 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm"] Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.651128 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zjr\" (UniqueName: \"kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.651240 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.651299 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.753644 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.753802 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zjr\" (UniqueName: \"kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.753911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.759283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.760291 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.777569 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zjr\" (UniqueName: \"kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-497rm\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:14 crc kubenswrapper[4660]: I1129 07:46:14.937204 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:15 crc kubenswrapper[4660]: I1129 07:46:15.542883 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm"] Nov 29 07:46:15 crc kubenswrapper[4660]: W1129 07:46:15.544541 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7917f022_eed4_4622_a10e_82a72f068b29.slice/crio-21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59 WatchSource:0}: Error finding container 21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59: Status 404 returned error can't find the container with id 21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59 Nov 29 07:46:16 crc kubenswrapper[4660]: I1129 07:46:16.562142 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" event={"ID":"7917f022-eed4-4622-a10e-82a72f068b29","Type":"ContainerStarted","Data":"d12752c609f4a56a12d29d5413bf097945d0fd161decaacd575a2006e2bcef5d"} Nov 29 07:46:16 crc kubenswrapper[4660]: I1129 07:46:16.562650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" event={"ID":"7917f022-eed4-4622-a10e-82a72f068b29","Type":"ContainerStarted","Data":"21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59"} Nov 29 07:46:16 crc kubenswrapper[4660]: I1129 07:46:16.580298 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" podStartSLOduration=2.080634243 podStartE2EDuration="2.58028165s" podCreationTimestamp="2025-11-29 07:46:14 +0000 UTC" firstStartedPulling="2025-11-29 07:46:15.550808787 +0000 UTC m=+1866.104338686" lastFinishedPulling="2025-11-29 07:46:16.050456194 +0000 UTC m=+1866.603986093" observedRunningTime="2025-11-29 07:46:16.577247833 +0000 UTC m=+1867.130777722" watchObservedRunningTime="2025-11-29 07:46:16.58028165 +0000 UTC m=+1867.133811549" 
Nov 29 07:46:19 crc kubenswrapper[4660]: I1129 07:46:19.593688 4660 generic.go:334] "Generic (PLEG): container finished" podID="7917f022-eed4-4622-a10e-82a72f068b29" containerID="d12752c609f4a56a12d29d5413bf097945d0fd161decaacd575a2006e2bcef5d" exitCode=0 Nov 29 07:46:19 crc kubenswrapper[4660]: I1129 07:46:19.593793 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" event={"ID":"7917f022-eed4-4622-a10e-82a72f068b29","Type":"ContainerDied","Data":"d12752c609f4a56a12d29d5413bf097945d0fd161decaacd575a2006e2bcef5d"} Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.036984 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.088368 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory\") pod \"7917f022-eed4-4622-a10e-82a72f068b29\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.088498 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key\") pod \"7917f022-eed4-4622-a10e-82a72f068b29\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.088712 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55zjr\" (UniqueName: \"kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr\") pod \"7917f022-eed4-4622-a10e-82a72f068b29\" (UID: \"7917f022-eed4-4622-a10e-82a72f068b29\") " Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.110097 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr" (OuterVolumeSpecName: "kube-api-access-55zjr") pod "7917f022-eed4-4622-a10e-82a72f068b29" (UID: "7917f022-eed4-4622-a10e-82a72f068b29"). InnerVolumeSpecName "kube-api-access-55zjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.120918 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory" (OuterVolumeSpecName: "inventory") pod "7917f022-eed4-4622-a10e-82a72f068b29" (UID: "7917f022-eed4-4622-a10e-82a72f068b29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.129792 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7917f022-eed4-4622-a10e-82a72f068b29" (UID: "7917f022-eed4-4622-a10e-82a72f068b29"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.191455 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55zjr\" (UniqueName: \"kubernetes.io/projected/7917f022-eed4-4622-a10e-82a72f068b29-kube-api-access-55zjr\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.191503 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.191516 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7917f022-eed4-4622-a10e-82a72f068b29-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.617356 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" event={"ID":"7917f022-eed4-4622-a10e-82a72f068b29","Type":"ContainerDied","Data":"21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59"} Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.617399 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21241d0048038b967217a1ca978ad9600a3e052d03d30ed9120f73adc5ce8d59" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.617473 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-497rm" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.693833 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:46:21 crc kubenswrapper[4660]: E1129 07:46:21.694475 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.704516 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh"] Nov 29 07:46:21 crc kubenswrapper[4660]: E1129 07:46:21.705136 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7917f022-eed4-4622-a10e-82a72f068b29" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.705226 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7917f022-eed4-4622-a10e-82a72f068b29" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.705527 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7917f022-eed4-4622-a10e-82a72f068b29" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.706433 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.710261 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.715989 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh"] Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.717113 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.717318 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.718232 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.802760 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.803100 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6pn\" (UniqueName: \"kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.803208 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.803325 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.905200 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.905311 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.905376 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.905456 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6pn\" (UniqueName: \"kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.909255 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.909811 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.911948 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:21 crc kubenswrapper[4660]: I1129 07:46:21.929400 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6pn\" (UniqueName: \"kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:22 crc kubenswrapper[4660]: I1129 07:46:22.046249 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:46:22 crc kubenswrapper[4660]: I1129 07:46:22.538861 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh"] Nov 29 07:46:22 crc kubenswrapper[4660]: I1129 07:46:22.627127 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" event={"ID":"92f06c4a-45f4-4542-b502-210d08515f70","Type":"ContainerStarted","Data":"c42a2533d492d2acb43c6c749190609a8f91bb9b818037aef0a0b69bcd612668"} Nov 29 07:46:23 crc kubenswrapper[4660]: I1129 07:46:23.636543 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" event={"ID":"92f06c4a-45f4-4542-b502-210d08515f70","Type":"ContainerStarted","Data":"39069d980379bda19af4551ac22c98581b30f6842fe85cc2a357ecd3cc13879f"} Nov 29 07:46:23 crc kubenswrapper[4660]: I1129 07:46:23.659655 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" podStartSLOduration=2.2252374440000002 podStartE2EDuration="2.65958383s" podCreationTimestamp="2025-11-29 07:46:21 +0000 UTC" firstStartedPulling="2025-11-29 07:46:22.551838774 +0000 UTC m=+1873.105368673" lastFinishedPulling="2025-11-29 07:46:22.98618516 +0000 UTC m=+1873.539715059" observedRunningTime="2025-11-29 07:46:23.653141445 +0000 UTC m=+1874.206671344" watchObservedRunningTime="2025-11-29 07:46:23.65958383 +0000 UTC m=+1874.213113769" Nov 29 07:46:32 crc kubenswrapper[4660]: I1129 07:46:32.693833 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:46:32 crc kubenswrapper[4660]: E1129 07:46:32.694584 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:46:45 crc kubenswrapper[4660]: I1129 07:46:45.694311 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:46:45 crc kubenswrapper[4660]: E1129 07:46:45.695194 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:47:00 crc kubenswrapper[4660]: I1129 07:47:00.693857 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:47:00 crc kubenswrapper[4660]: E1129 07:47:00.694676 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:47:11 crc kubenswrapper[4660]: I1129 07:47:11.693162 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:47:13 crc kubenswrapper[4660]: I1129 07:47:13.145343 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5"} Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.049962 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4352-account-create-update-l6jvk"] Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.069041 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7b98-account-create-update-jxkfn"] Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.081540 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4352-account-create-update-l6jvk"] Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.090405 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7b98-account-create-update-jxkfn"] Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.705239 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483004e9-a9b0-4ea7-96b4-a7aaed456ac7" path="/var/lib/kubelet/pods/483004e9-a9b0-4ea7-96b4-a7aaed456ac7/volumes" Nov 29 07:47:17 crc kubenswrapper[4660]: I1129 07:47:17.706136 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61209158-65d1-44a1-84bb-3b2f98b5566f" path="/var/lib/kubelet/pods/61209158-65d1-44a1-84bb-3b2f98b5566f/volumes" Nov 29 07:47:18 crc kubenswrapper[4660]: I1129 07:47:18.039360 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5sg7b"] Nov 29 07:47:18 crc kubenswrapper[4660]: I1129 07:47:18.062875 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b9jkx"] Nov 29 07:47:18 crc kubenswrapper[4660]: I1129 07:47:18.074941 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5sg7b"] Nov 29 07:47:18 crc kubenswrapper[4660]: I1129 07:47:18.083848 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b9jkx"] Nov 29 07:47:19 crc kubenswrapper[4660]: I1129 07:47:19.703506 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ae65a4-7c1b-41bb-b242-9229ddaa0e6b" path="/var/lib/kubelet/pods/48ae65a4-7c1b-41bb-b242-9229ddaa0e6b/volumes" Nov 29 07:47:19 crc kubenswrapper[4660]: I1129 07:47:19.704097 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdc64a4-27cd-4937-8d01-9e2742b75db5" path="/var/lib/kubelet/pods/7bdc64a4-27cd-4937-8d01-9e2742b75db5/volumes" Nov 29 07:47:22 crc kubenswrapper[4660]: I1129 07:47:22.035741 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p8r6k"] Nov 29 07:47:22 crc kubenswrapper[4660]: I1129 07:47:22.045322 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p8r6k"] Nov 29 07:47:23 crc kubenswrapper[4660]: I1129 07:47:23.032946 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b63f-account-create-update-qh5x7"] Nov 29 07:47:23 crc kubenswrapper[4660]: I1129 07:47:23.042506 4660 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b63f-account-create-update-qh5x7"] Nov 29 07:47:23 crc kubenswrapper[4660]: I1129 07:47:23.707253 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff6cea5-6bff-44ad-b843-fb7572478416" path="/var/lib/kubelet/pods/1ff6cea5-6bff-44ad-b843-fb7572478416/volumes" Nov 29 07:47:23 crc kubenswrapper[4660]: I1129 07:47:23.708180 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c1811e-310d-4d94-8a31-14421da00093" path="/var/lib/kubelet/pods/a2c1811e-310d-4d94-8a31-14421da00093/volumes" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.299743 4660 scope.go:117] "RemoveContainer" containerID="3f64171e907bb072da9567e9ad8774eda9920295192fe0fde3a6a2e2f0ba6780" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.344869 4660 scope.go:117] "RemoveContainer" containerID="6096b3c9fff58e366d196e93530f4deaa33ffcacdfa56f1bab32fb7dbedc8c72" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.512869 4660 scope.go:117] "RemoveContainer" containerID="74e1fcf3cb91dcdbeb197346d7b1095c1ea6c2cec35af3d8a7a81454bf52e551" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.587353 4660 scope.go:117] "RemoveContainer" containerID="79b82c321d3981911b4e5f46791675e2fad11eef618fddabb1e59981332c4b91" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.638866 4660 scope.go:117] "RemoveContainer" containerID="6a0986934544ead13faf262aca052f7033da16729893cce239036ff933eb52d7" Nov 29 07:48:04 crc kubenswrapper[4660]: I1129 07:48:04.685749 4660 scope.go:117] "RemoveContainer" containerID="3f394f02b632c5f493909426cd0ed07d9ab249959bc717e0d0dce5ee21fd8a8d" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.073728 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mpdjp"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.083190 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4d27-account-create-update-sbsxh"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.094378 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fm77f"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.103959 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5ac6-account-create-update-sqpwx"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.118425 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mpdjp"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.126554 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4d27-account-create-update-sbsxh"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.134384 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5ac6-account-create-update-sqpwx"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.147242 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fm77f"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.159558 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dwvvj"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.168730 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-99db-account-create-update-6xmmv"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.179282 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-99db-account-create-update-6xmmv"] Nov 29 07:48:19 crc 
kubenswrapper[4660]: I1129 07:48:19.188233 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dwvvj"] Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.706156 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c" path="/var/lib/kubelet/pods/1fb6ebf7-7e64-4470-b468-2ce0f1a0bd8c/volumes" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.707143 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cb8551-52db-4811-ab40-fa77d412662a" path="/var/lib/kubelet/pods/68cb8551-52db-4811-ab40-fa77d412662a/volumes" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.707971 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85987bf0-2e03-4db1-a740-0184d42b2540" path="/var/lib/kubelet/pods/85987bf0-2e03-4db1-a740-0184d42b2540/volumes" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.708837 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1db5943-0f0c-4b0f-827d-297cf4773210" path="/var/lib/kubelet/pods/b1db5943-0f0c-4b0f-827d-297cf4773210/volumes" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.710365 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5baff9b-19db-4ae1-b016-dce7f4f84f1a" path="/var/lib/kubelet/pods/c5baff9b-19db-4ae1-b016-dce7f4f84f1a/volumes" Nov 29 07:48:19 crc kubenswrapper[4660]: I1129 07:48:19.712560 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7173830-627e-4f39-b843-5ced3d7b5efa" path="/var/lib/kubelet/pods/e7173830-627e-4f39-b843-5ced3d7b5efa/volumes" Nov 29 07:48:46 crc kubenswrapper[4660]: I1129 07:48:46.044536 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vnbp6"] Nov 29 07:48:46 crc kubenswrapper[4660]: I1129 07:48:46.056232 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vnbp6"] Nov 29 07:48:47 crc kubenswrapper[4660]: I1129 07:48:47.707924 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e6819d-5ed0-4388-a7da-8d22e20ad10c" path="/var/lib/kubelet/pods/d1e6819d-5ed0-4388-a7da-8d22e20ad10c/volumes" Nov 29 07:49:04 crc kubenswrapper[4660]: I1129 07:49:04.876718 4660 scope.go:117] "RemoveContainer" containerID="873d51bc11ae1e02f259575ba5348581b1ed4c53753ab5429caaa068bb45c059" Nov 29 07:49:04 crc kubenswrapper[4660]: I1129 07:49:04.904913 4660 scope.go:117] "RemoveContainer" containerID="e47b38093f45d74b42130e1e15cbe990d777fb72cece1fad7f8cf16880af8ea2" Nov 29 07:49:04 crc kubenswrapper[4660]: I1129 07:49:04.950022 4660 scope.go:117] "RemoveContainer" containerID="959dc84fbe8479b5287bb18d41813a7ac01e1627a19dc4cd1be3f0f5f6d49f36" Nov 29 07:49:04 crc kubenswrapper[4660]: I1129 07:49:04.984286 4660 scope.go:117] "RemoveContainer" containerID="e947b4446c70333c504ff836efad20d9ae39a6d5a04e55f81a97c909e5897f20" Nov 29 07:49:05 crc kubenswrapper[4660]: I1129 07:49:05.017116 4660 scope.go:117] "RemoveContainer" containerID="fdb87d30d273f9ce0caa15fe2cbeb98854483dfc1c6edafc9c01d44589349a7e" Nov 29 07:49:05 crc kubenswrapper[4660]: I1129 07:49:05.059688 4660 scope.go:117] "RemoveContainer" containerID="f2a60c3558d4aa3df89f3c7ac83324c6e746107500b954b29e6b569d1ea5aa5f" Nov 29 07:49:05 crc kubenswrapper[4660]: I1129 07:49:05.105937 4660 scope.go:117] "RemoveContainer" containerID="ea923b87a60c042407e01907eb13ad4896ad2126a7d19a327bb86f8d6f016d1f" Nov 29 07:49:05 crc kubenswrapper[4660]: I1129 07:49:05.146647 4660 scope.go:117] 
"RemoveContainer" containerID="897fc2d69e05d4e9fad535d2c9567897e246ca10506442de808f635e602e2616" Nov 29 07:49:05 crc kubenswrapper[4660]: I1129 07:49:05.165875 4660 scope.go:117] "RemoveContainer" containerID="52bfb9ff5e3725a9d18e8ab5cbda7ef807136ab8ed06a0b1c4ce534ff4101d56" Nov 29 07:49:35 crc kubenswrapper[4660]: I1129 07:49:35.499856 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:49:35 crc kubenswrapper[4660]: I1129 07:49:35.500353 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.531281 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.535937 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.555397 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.620732 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.620911 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rg6\" (UniqueName: \"kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.620959 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.722602 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rg6\" (UniqueName: \"kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.722669 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " 
pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.722740 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.723142 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.723279 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.741167 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rg6\" (UniqueName: \"kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6\") pod \"redhat-operators-sgj7t\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:49:59 crc kubenswrapper[4660]: I1129 07:49:59.859092 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:00 crc kubenswrapper[4660]: I1129 07:50:00.402777 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:50:00 crc kubenswrapper[4660]: I1129 07:50:00.790951 4660 generic.go:334] "Generic (PLEG): container finished" podID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerID="ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217" exitCode=0 Nov 29 07:50:00 crc kubenswrapper[4660]: I1129 07:50:00.791057 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerDied","Data":"ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217"} Nov 29 07:50:00 crc kubenswrapper[4660]: I1129 07:50:00.791230 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerStarted","Data":"0364eb04d90876906577a157ef94c25521ac501d5064779ae0687dbbbb556a22"} Nov 29 07:50:02 crc kubenswrapper[4660]: I1129 07:50:02.808168 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerStarted","Data":"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d"} Nov 29 07:50:05 crc kubenswrapper[4660]: I1129 07:50:05.500424 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:05 crc kubenswrapper[4660]: 
I1129 07:50:05.500869 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:50:06 crc kubenswrapper[4660]: I1129 07:50:06.052298 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wgw94"] Nov 29 07:50:06 crc kubenswrapper[4660]: I1129 07:50:06.064283 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wgw94"] Nov 29 07:50:06 crc kubenswrapper[4660]: I1129 07:50:06.842475 4660 generic.go:334] "Generic (PLEG): container finished" podID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerID="bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d" exitCode=0 Nov 29 07:50:06 crc kubenswrapper[4660]: I1129 07:50:06.842520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerDied","Data":"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d"} Nov 29 07:50:07 crc kubenswrapper[4660]: I1129 07:50:07.713404 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2de8145-fcb6-4202-8aa4-85b696c9e69c" path="/var/lib/kubelet/pods/b2de8145-fcb6-4202-8aa4-85b696c9e69c/volumes" Nov 29 07:50:07 crc kubenswrapper[4660]: I1129 07:50:07.854869 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerStarted","Data":"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95"} Nov 29 07:50:07 crc kubenswrapper[4660]: I1129 07:50:07.882023 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgj7t" podStartSLOduration=2.429288312 podStartE2EDuration="8.882006857s" podCreationTimestamp="2025-11-29 07:49:59 +0000 UTC" firstStartedPulling="2025-11-29 07:50:00.792169565 +0000 UTC m=+2091.345699464" lastFinishedPulling="2025-11-29 07:50:07.24488811 +0000 UTC m=+2097.798418009" observedRunningTime="2025-11-29 07:50:07.875031048 +0000 UTC m=+2098.428560947" watchObservedRunningTime="2025-11-29 07:50:07.882006857 +0000 UTC m=+2098.435536756" Nov 29 07:50:09 crc kubenswrapper[4660]: I1129 07:50:09.859501 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:09 crc kubenswrapper[4660]: I1129 07:50:09.859859 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:10 crc kubenswrapper[4660]: I1129 07:50:10.915783 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sgj7t" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="registry-server" probeResult="failure" output=< Nov 29 07:50:10 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:50:10 crc kubenswrapper[4660]: > Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.073325 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wcwr9"] Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.083111 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7j498"] Nov 
29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.092812 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wcwr9"] Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.106362 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7j498"] Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.710818 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f9cecd-58f3-4e48-acfc-6de8cce380df" path="/var/lib/kubelet/pods/07f9cecd-58f3-4e48-acfc-6de8cce380df/volumes" Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.712976 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34f6bca-d788-40bf-9065-f7f331a8f8d9" path="/var/lib/kubelet/pods/e34f6bca-d788-40bf-9065-f7f331a8f8d9/volumes" Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.912437 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:19 crc kubenswrapper[4660]: I1129 07:50:19.983519 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:20 crc kubenswrapper[4660]: I1129 07:50:20.151588 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:50:20 crc kubenswrapper[4660]: I1129 07:50:20.973540 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgj7t" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="registry-server" containerID="cri-o://ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95" gracePeriod=2 Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.034244 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gcb27"] Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.046706 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gcb27"] Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.429641 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.602589 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rg6\" (UniqueName: \"kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6\") pod \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.602695 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities\") pod \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.602888 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content\") pod \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\" (UID: \"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4\") " Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.604851 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities" (OuterVolumeSpecName: "utilities") pod "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" (UID: "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.609432 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6" (OuterVolumeSpecName: "kube-api-access-48rg6") pod "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" (UID: "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4"). InnerVolumeSpecName "kube-api-access-48rg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.705312 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4ba5b4-3360-458f-8de9-6c0630ad7cbf" path="/var/lib/kubelet/pods/0a4ba5b4-3360-458f-8de9-6c0630ad7cbf/volumes" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.705360 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.705396 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48rg6\" (UniqueName: \"kubernetes.io/projected/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-kube-api-access-48rg6\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.754015 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" (UID: "c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.807474 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.986879 4660 generic.go:334] "Generic (PLEG): container finished" podID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerID="ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95" exitCode=0 Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.987005 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerDied","Data":"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95"} Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.987033 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgj7t" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.987496 4660 scope.go:117] "RemoveContainer" containerID="ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95" Nov 29 07:50:21 crc kubenswrapper[4660]: I1129 07:50:21.989760 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgj7t" event={"ID":"c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4","Type":"ContainerDied","Data":"0364eb04d90876906577a157ef94c25521ac501d5064779ae0687dbbbb556a22"} Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.038321 4660 scope.go:117] "RemoveContainer" containerID="bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.043719 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.051383 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgj7t"] Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.069230 4660 scope.go:117] "RemoveContainer" containerID="ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.110176 4660 scope.go:117] "RemoveContainer" containerID="ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95" Nov 29 07:50:22 crc kubenswrapper[4660]: E1129 07:50:22.110861 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95\": container with ID starting with ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95 not found: ID does not exist" containerID="ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.110911 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95"} err="failed to get container status \"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95\": rpc error: code = NotFound desc = could not find container \"ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95\": container with ID starting with ddda34ce9db523ca75526ae4797fea0c6372f4ca51b1c8f067e67abe2b725e95 not found: ID does not exist" Nov 29 07:50:22 crc 
kubenswrapper[4660]: I1129 07:50:22.110941 4660 scope.go:117] "RemoveContainer" containerID="bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d" Nov 29 07:50:22 crc kubenswrapper[4660]: E1129 07:50:22.111353 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d\": container with ID starting with bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d not found: ID does not exist" containerID="bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.111418 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d"} err="failed to get container status \"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d\": rpc error: code = NotFound desc = could not find container \"bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d\": container with ID starting with bc8aa7016dfc4716142d790ac1bc40d95878ed9ed362ebb5a8fff50eb50c2f0d not found: ID does not exist" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.111447 4660 scope.go:117] "RemoveContainer" containerID="ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217" Nov 29 07:50:22 crc kubenswrapper[4660]: E1129 07:50:22.111816 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217\": container with ID starting with ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217 not found: ID does not exist" containerID="ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217" Nov 29 07:50:22 crc kubenswrapper[4660]: I1129 07:50:22.111841 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217"} err="failed to get container status \"ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217\": rpc error: code = NotFound desc = could not find container \"ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217\": container with ID starting with ba544a1a66b53b202ba409dc7221e39397d9b3f95a34d3ba00d1d267fa9f1217 not found: ID does not exist" Nov 29 07:50:23 crc kubenswrapper[4660]: I1129 07:50:23.707861 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" path="/var/lib/kubelet/pods/c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4/volumes" Nov 29 07:50:27 crc kubenswrapper[4660]: I1129 07:50:27.035040 4660 generic.go:334] "Generic (PLEG): container finished" podID="92f06c4a-45f4-4542-b502-210d08515f70" containerID="39069d980379bda19af4551ac22c98581b30f6842fe85cc2a357ecd3cc13879f" exitCode=0 Nov 29 07:50:27 crc kubenswrapper[4660]: I1129 07:50:27.035147 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" event={"ID":"92f06c4a-45f4-4542-b502-210d08515f70","Type":"ContainerDied","Data":"39069d980379bda19af4551ac22c98581b30f6842fe85cc2a357ecd3cc13879f"} Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.517423 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.641104 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory\") pod \"92f06c4a-45f4-4542-b502-210d08515f70\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.641214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle\") pod \"92f06c4a-45f4-4542-b502-210d08515f70\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.641349 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc6pn\" (UniqueName: \"kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn\") pod \"92f06c4a-45f4-4542-b502-210d08515f70\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.641423 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key\") pod \"92f06c4a-45f4-4542-b502-210d08515f70\" (UID: \"92f06c4a-45f4-4542-b502-210d08515f70\") " Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.647050 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn" (OuterVolumeSpecName: "kube-api-access-tc6pn") pod "92f06c4a-45f4-4542-b502-210d08515f70" (UID: "92f06c4a-45f4-4542-b502-210d08515f70"). InnerVolumeSpecName "kube-api-access-tc6pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.648182 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "92f06c4a-45f4-4542-b502-210d08515f70" (UID: "92f06c4a-45f4-4542-b502-210d08515f70"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.670550 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "92f06c4a-45f4-4542-b502-210d08515f70" (UID: "92f06c4a-45f4-4542-b502-210d08515f70"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.671998 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory" (OuterVolumeSpecName: "inventory") pod "92f06c4a-45f4-4542-b502-210d08515f70" (UID: "92f06c4a-45f4-4542-b502-210d08515f70"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.744054 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc6pn\" (UniqueName: \"kubernetes.io/projected/92f06c4a-45f4-4542-b502-210d08515f70-kube-api-access-tc6pn\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.744306 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.744369 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:28 crc kubenswrapper[4660]: I1129 07:50:28.744423 4660 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f06c4a-45f4-4542-b502-210d08515f70-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.055361 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" event={"ID":"92f06c4a-45f4-4542-b502-210d08515f70","Type":"ContainerDied","Data":"c42a2533d492d2acb43c6c749190609a8f91bb9b818037aef0a0b69bcd612668"} Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.055412 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42a2533d492d2acb43c6c749190609a8f91bb9b818037aef0a0b69bcd612668" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.055439 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137051 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t"] Nov 29 07:50:29 crc kubenswrapper[4660]: E1129 07:50:29.137496 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="extract-utilities" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137521 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="extract-utilities" Nov 29 07:50:29 crc kubenswrapper[4660]: E1129 07:50:29.137541 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="extract-content" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137550 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="extract-content" Nov 29 07:50:29 crc kubenswrapper[4660]: E1129 07:50:29.137577 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f06c4a-45f4-4542-b502-210d08515f70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137588 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f06c4a-45f4-4542-b502-210d08515f70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:50:29 crc kubenswrapper[4660]: E1129 07:50:29.137645 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="registry-server" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137654 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="registry-server" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137889 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b78ea2-2bf5-4ffc-ad4a-476b927b45c4" containerName="registry-server" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.137927 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f06c4a-45f4-4542-b502-210d08515f70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.138741 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.141576 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.141882 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.141929 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.142116 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.153774 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t"] Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.253216 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6gp\" (UniqueName: \"kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.253287 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.253320 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.487782 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr6gp\" (UniqueName: \"kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.487852 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.487885 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.491660 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.510585 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.521648 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr6gp\" (UniqueName: \"kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5895t\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:29 crc kubenswrapper[4660]: I1129 07:50:29.754461 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:50:30 crc kubenswrapper[4660]: I1129 07:50:30.253465 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t"] Nov 29 07:50:30 crc kubenswrapper[4660]: I1129 07:50:30.271239 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:50:31 crc kubenswrapper[4660]: I1129 07:50:31.028321 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vxrqr"] Nov 29 07:50:31 crc kubenswrapper[4660]: I1129 07:50:31.037672 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vxrqr"] Nov 29 07:50:31 crc kubenswrapper[4660]: I1129 07:50:31.071308 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" event={"ID":"c2698bcc-7e72-4b53-8bbf-9d71b4720148","Type":"ContainerStarted","Data":"b5add213e53436702ef2a4e5c0d2f3b7dd7be4c462abef6536d8409cc8660786"} Nov 29 07:50:31 crc kubenswrapper[4660]: I1129 07:50:31.709888 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e0c494-1877-49d7-8877-308fb75d13b1" path="/var/lib/kubelet/pods/a8e0c494-1877-49d7-8877-308fb75d13b1/volumes" Nov 29 07:50:33 crc kubenswrapper[4660]: I1129 07:50:33.107261 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" event={"ID":"c2698bcc-7e72-4b53-8bbf-9d71b4720148","Type":"ContainerStarted","Data":"ca1ae51090c32b51e5259213907f24952f251809bd8478a3554f2df6b77e38e7"} Nov 29 07:50:33 crc kubenswrapper[4660]: I1129 07:50:33.130299 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" podStartSLOduration=2.31982002 
podStartE2EDuration="4.130267994s" podCreationTimestamp="2025-11-29 07:50:29 +0000 UTC" firstStartedPulling="2025-11-29 07:50:30.271043726 +0000 UTC m=+2120.824573625" lastFinishedPulling="2025-11-29 07:50:32.08149169 +0000 UTC m=+2122.635021599" observedRunningTime="2025-11-29 07:50:33.123748268 +0000 UTC m=+2123.677278177" watchObservedRunningTime="2025-11-29 07:50:33.130267994 +0000 UTC m=+2123.683797893" Nov 29 07:50:35 crc kubenswrapper[4660]: I1129 07:50:35.499927 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:50:35 crc kubenswrapper[4660]: I1129 07:50:35.500234 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:50:35 crc kubenswrapper[4660]: I1129 07:50:35.500280 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:50:35 crc kubenswrapper[4660]: I1129 07:50:35.500906 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:50:35 crc kubenswrapper[4660]: I1129 07:50:35.500969 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5" gracePeriod=600 Nov 29 07:50:36 crc kubenswrapper[4660]: I1129 07:50:36.137192 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5" exitCode=0 Nov 29 07:50:36 crc kubenswrapper[4660]: I1129 07:50:36.137263 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5"} Nov 29 07:50:36 crc kubenswrapper[4660]: I1129 07:50:36.137717 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"} Nov 29 07:50:36 crc kubenswrapper[4660]: I1129 07:50:36.137752 4660 scope.go:117] "RemoveContainer" containerID="ae98942ef9a1746d3c3e414c2c9cad736cd80e5472c704a2591063ff71781b5c" Nov 29 07:50:45 crc kubenswrapper[4660]: I1129 07:50:45.084346 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-g6zvg"] Nov 29 07:50:45 crc kubenswrapper[4660]: I1129 07:50:45.095986 4660 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-db-sync-g6zvg"] Nov 29 07:50:45 crc kubenswrapper[4660]: I1129 07:50:45.705426 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a095fb-8968-4986-b063-8652e7e2cd0b" path="/var/lib/kubelet/pods/50a095fb-8968-4986-b063-8652e7e2cd0b/volumes" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.357711 4660 scope.go:117] "RemoveContainer" containerID="799ebf3bd6cc4f594dd6ed33f3794df92352c8f1e93d05ef46ab674c6481c3b6" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.386408 4660 scope.go:117] "RemoveContainer" containerID="49c823586286d21c211797d636011301862e0c8db42df626505293156f102fbe" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.460071 4660 scope.go:117] "RemoveContainer" containerID="e6116c7794fb2072c876f0816b27f85bb269b082e8112814cb8f50c033164b46" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.503922 4660 scope.go:117] "RemoveContainer" containerID="b0ace9d78f96995af6c400c92ef2053d41212469653a67b0f15650f0c0070cf1" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.537685 4660 scope.go:117] "RemoveContainer" containerID="ec17e27d5188783f27817fc57273a36da1772c39c9c40da96513f83e73893bb2" Nov 29 07:51:05 crc kubenswrapper[4660]: I1129 07:51:05.578931 4660 scope.go:117] "RemoveContainer" containerID="3123a66dbb851c07c414d2440ddfabf49c61379267a64fb3c92118d02a764047" Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.048361 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6mpvs"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.060538 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8f42b"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.068801 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f1ad-account-create-update-lvpxs"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.082792 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2ffsl"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.089906 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e8ff-account-create-update-mqxsz"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.097720 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6mpvs"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.104395 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f1ad-account-create-update-lvpxs"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.110478 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2ffsl"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.118896 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e8ff-account-create-update-mqxsz"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.126631 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8f42b"] Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.707947 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183b6811-11ff-49be-b2b0-d415851242fa" path="/var/lib/kubelet/pods/183b6811-11ff-49be-b2b0-d415851242fa/volumes" Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.708599 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433c834f-9c69-4b1e-9849-56fd950dcb70" 
path="/var/lib/kubelet/pods/433c834f-9c69-4b1e-9849-56fd950dcb70/volumes" Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.709144 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e098096-b124-4742-87ed-2358975493a2" path="/var/lib/kubelet/pods/9e098096-b124-4742-87ed-2358975493a2/volumes" Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.709731 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d5204a-1c54-41f1-861f-2812a11a6f37" path="/var/lib/kubelet/pods/c8d5204a-1c54-41f1-861f-2812a11a6f37/volumes" Nov 29 07:51:37 crc kubenswrapper[4660]: I1129 07:51:37.710792 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf386e95-5335-4ed0-b1a5-10744c63370e" path="/var/lib/kubelet/pods/cf386e95-5335-4ed0-b1a5-10744c63370e/volumes" Nov 29 07:51:38 crc kubenswrapper[4660]: I1129 07:51:38.030064 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5fe1-account-create-update-kqp9b"] Nov 29 07:51:38 crc kubenswrapper[4660]: I1129 07:51:38.037803 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5fe1-account-create-update-kqp9b"] Nov 29 07:51:39 crc kubenswrapper[4660]: I1129 07:51:39.708282 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d2c764-f7a5-4f5e-92e2-a4031e313876" path="/var/lib/kubelet/pods/91d2c764-f7a5-4f5e-92e2-a4031e313876/volumes" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.736496 4660 scope.go:117] "RemoveContainer" containerID="4e6a4ab47e39635438a8c8b7d6982d2f3bc03a81b745b422c0404a0b5d95a523" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.763055 4660 scope.go:117] "RemoveContainer" containerID="3fb3ae80807a25f4dfcc84da3443e60984567682fb8935d3f22524a15514fed1" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.827587 4660 scope.go:117] "RemoveContainer" containerID="b2bf92e7f6b9b9474dc20633adbf0e62bfd1166c8af77917d5721be32fdbc2cc" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.876284 4660 scope.go:117] "RemoveContainer" containerID="15aea278b1150e472d934a005eca50d6d9cf57d5a2c4b77f3991ef62d56caaec" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.929071 4660 scope.go:117] "RemoveContainer" containerID="cab8d4590880d39596434d0b2f1416d9546adbc78022032acb0ef4a6dd802956" Nov 29 07:52:05 crc kubenswrapper[4660]: I1129 07:52:05.975907 4660 scope.go:117] "RemoveContainer" containerID="f6b2788f5e4eff53ca5c674e275152232395a1885b7c743e60dfbf595b991818" Nov 29 07:52:35 crc kubenswrapper[4660]: I1129 07:52:35.499739 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:52:35 crc kubenswrapper[4660]: I1129 07:52:35.500216 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:52:37 crc kubenswrapper[4660]: I1129 07:52:37.055169 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kllk5"] Nov 29 07:52:37 crc kubenswrapper[4660]: I1129 07:52:37.064690 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-kllk5"] Nov 29 07:52:37 crc kubenswrapper[4660]: I1129 07:52:37.705222 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d711e60-e860-4ba2-aa3c-a8219218cd8e" path="/var/lib/kubelet/pods/8d711e60-e860-4ba2-aa3c-a8219218cd8e/volumes" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.373745 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.376301 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.382385 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.472792 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.472863 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.472882 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tk7\" (UniqueName: \"kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.574417 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.574489 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.574507 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tk7\" (UniqueName: \"kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.575109 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities\") pod \"redhat-marketplace-4rcnn\" (UID: 
\"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.575125 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.593843 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tk7\" (UniqueName: \"kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7\") pod \"redhat-marketplace-4rcnn\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:38 crc kubenswrapper[4660]: I1129 07:52:38.715134 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:39 crc kubenswrapper[4660]: I1129 07:52:39.234752 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:39 crc kubenswrapper[4660]: I1129 07:52:39.379310 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerStarted","Data":"698819ca2e2233bc5e6a3d90b2acc88c4f6207a77a8887506f1e380e6f14b4dd"} Nov 29 07:52:40 crc kubenswrapper[4660]: I1129 07:52:40.390152 4660 generic.go:334] "Generic (PLEG): container finished" podID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerID="93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c" exitCode=0 Nov 29 07:52:40 crc kubenswrapper[4660]: I1129 07:52:40.390492 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerDied","Data":"93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c"} Nov 29 07:52:42 crc kubenswrapper[4660]: I1129 07:52:42.414101 4660 generic.go:334] "Generic (PLEG): container finished" podID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerID="3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d" exitCode=0 Nov 29 07:52:42 crc kubenswrapper[4660]: I1129 07:52:42.414502 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerDied","Data":"3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d"} Nov 29 07:52:43 crc kubenswrapper[4660]: I1129 07:52:43.442979 4660 generic.go:334] "Generic (PLEG): container finished" podID="c2698bcc-7e72-4b53-8bbf-9d71b4720148" containerID="ca1ae51090c32b51e5259213907f24952f251809bd8478a3554f2df6b77e38e7" exitCode=0 Nov 29 07:52:43 crc kubenswrapper[4660]: I1129 07:52:43.443307 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" event={"ID":"c2698bcc-7e72-4b53-8bbf-9d71b4720148","Type":"ContainerDied","Data":"ca1ae51090c32b51e5259213907f24952f251809bd8478a3554f2df6b77e38e7"} Nov 29 07:52:44 crc kubenswrapper[4660]: I1129 07:52:44.452849 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" 
event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerStarted","Data":"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8"} Nov 29 07:52:44 crc kubenswrapper[4660]: I1129 07:52:44.479270 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rcnn" podStartSLOduration=3.6117276780000003 podStartE2EDuration="6.479247894s" podCreationTimestamp="2025-11-29 07:52:38 +0000 UTC" firstStartedPulling="2025-11-29 07:52:40.393054774 +0000 UTC m=+2250.946584683" lastFinishedPulling="2025-11-29 07:52:43.26057498 +0000 UTC m=+2253.814104899" observedRunningTime="2025-11-29 07:52:44.477936448 +0000 UTC m=+2255.031466357" watchObservedRunningTime="2025-11-29 07:52:44.479247894 +0000 UTC m=+2255.032777793" Nov 29 07:52:44 crc kubenswrapper[4660]: I1129 07:52:44.938003 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.019961 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr6gp\" (UniqueName: \"kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp\") pod \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.020014 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory\") pod \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.020172 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key\") pod \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\" (UID: \"c2698bcc-7e72-4b53-8bbf-9d71b4720148\") " Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.026196 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp" (OuterVolumeSpecName: "kube-api-access-hr6gp") pod "c2698bcc-7e72-4b53-8bbf-9d71b4720148" (UID: "c2698bcc-7e72-4b53-8bbf-9d71b4720148"). InnerVolumeSpecName "kube-api-access-hr6gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.055947 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory" (OuterVolumeSpecName: "inventory") pod "c2698bcc-7e72-4b53-8bbf-9d71b4720148" (UID: "c2698bcc-7e72-4b53-8bbf-9d71b4720148"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.057226 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c2698bcc-7e72-4b53-8bbf-9d71b4720148" (UID: "c2698bcc-7e72-4b53-8bbf-9d71b4720148"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.122198 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr6gp\" (UniqueName: \"kubernetes.io/projected/c2698bcc-7e72-4b53-8bbf-9d71b4720148-kube-api-access-hr6gp\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.122234 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.122243 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2698bcc-7e72-4b53-8bbf-9d71b4720148-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.478332 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.478376 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5895t" event={"ID":"c2698bcc-7e72-4b53-8bbf-9d71b4720148","Type":"ContainerDied","Data":"b5add213e53436702ef2a4e5c0d2f3b7dd7be4c462abef6536d8409cc8660786"} Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.478454 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5add213e53436702ef2a4e5c0d2f3b7dd7be4c462abef6536d8409cc8660786" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.559578 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz"] Nov 29 07:52:45 crc kubenswrapper[4660]: E1129 07:52:45.559991 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2698bcc-7e72-4b53-8bbf-9d71b4720148" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.560012 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2698bcc-7e72-4b53-8bbf-9d71b4720148" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.560259 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2698bcc-7e72-4b53-8bbf-9d71b4720148" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.561052 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.564063 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.564452 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.564771 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.565359 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.580008 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz"] Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.634379 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.634433 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjvw\" (UniqueName: \"kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.634486 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.736363 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.736432 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjvw\" (UniqueName: \"kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.736482 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.744680 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.746166 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.754863 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjvw\" (UniqueName: \"kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:45 crc kubenswrapper[4660]: I1129 07:52:45.884917 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:52:46 crc kubenswrapper[4660]: I1129 07:52:46.272343 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz"] Nov 29 07:52:46 crc kubenswrapper[4660]: W1129 07:52:46.287220 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d0ffb5c_54ae_48a8_9448_7b78f45814a7.slice/crio-9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4 WatchSource:0}: Error finding container 9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4: Status 404 returned error can't find the container with id 9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4 Nov 29 07:52:46 crc kubenswrapper[4660]: I1129 07:52:46.487085 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" event={"ID":"8d0ffb5c-54ae-48a8-9448-7b78f45814a7","Type":"ContainerStarted","Data":"9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4"} Nov 29 07:52:47 crc kubenswrapper[4660]: I1129 07:52:47.498626 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" event={"ID":"8d0ffb5c-54ae-48a8-9448-7b78f45814a7","Type":"ContainerStarted","Data":"ac2dcce041384c7069c131f6773fa7d59a7b1bbeca4e62325108683de38a5e0b"} Nov 29 07:52:47 crc kubenswrapper[4660]: I1129 07:52:47.525274 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" podStartSLOduration=1.7677948369999998 podStartE2EDuration="2.525248016s" podCreationTimestamp="2025-11-29 07:52:45 +0000 UTC" firstStartedPulling="2025-11-29 07:52:46.288772621 +0000 UTC 
m=+2256.842302520" lastFinishedPulling="2025-11-29 07:52:47.0462258 +0000 UTC m=+2257.599755699" observedRunningTime="2025-11-29 07:52:47.515224425 +0000 UTC m=+2258.068754334" watchObservedRunningTime="2025-11-29 07:52:47.525248016 +0000 UTC m=+2258.078777935" Nov 29 07:52:48 crc kubenswrapper[4660]: I1129 07:52:48.715275 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:48 crc kubenswrapper[4660]: I1129 07:52:48.715605 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:48 crc kubenswrapper[4660]: I1129 07:52:48.784734 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:49 crc kubenswrapper[4660]: I1129 07:52:49.572872 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:49 crc kubenswrapper[4660]: I1129 07:52:49.647638 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:51 crc kubenswrapper[4660]: I1129 07:52:51.534898 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4rcnn" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="registry-server" containerID="cri-o://4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8" gracePeriod=2 Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.030174 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.203083 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44tk7\" (UniqueName: \"kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7\") pod \"c4aec89f-374e-4386-8aec-48eb166d53d5\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.203509 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities\") pod \"c4aec89f-374e-4386-8aec-48eb166d53d5\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.203528 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content\") pod \"c4aec89f-374e-4386-8aec-48eb166d53d5\" (UID: \"c4aec89f-374e-4386-8aec-48eb166d53d5\") " Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.204984 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities" (OuterVolumeSpecName: "utilities") pod "c4aec89f-374e-4386-8aec-48eb166d53d5" (UID: "c4aec89f-374e-4386-8aec-48eb166d53d5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.214256 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7" (OuterVolumeSpecName: "kube-api-access-44tk7") pod "c4aec89f-374e-4386-8aec-48eb166d53d5" (UID: "c4aec89f-374e-4386-8aec-48eb166d53d5"). InnerVolumeSpecName "kube-api-access-44tk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.221865 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4aec89f-374e-4386-8aec-48eb166d53d5" (UID: "c4aec89f-374e-4386-8aec-48eb166d53d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.305399 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44tk7\" (UniqueName: \"kubernetes.io/projected/c4aec89f-374e-4386-8aec-48eb166d53d5-kube-api-access-44tk7\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.305437 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.305446 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4aec89f-374e-4386-8aec-48eb166d53d5-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.544190 4660 generic.go:334] "Generic (PLEG): container finished" podID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerID="4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8" exitCode=0 Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.544233 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerDied","Data":"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8"} Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.544259 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rcnn" event={"ID":"c4aec89f-374e-4386-8aec-48eb166d53d5","Type":"ContainerDied","Data":"698819ca2e2233bc5e6a3d90b2acc88c4f6207a77a8887506f1e380e6f14b4dd"} Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.544275 4660 scope.go:117] "RemoveContainer" containerID="4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.544382 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rcnn" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.567430 4660 scope.go:117] "RemoveContainer" containerID="3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.589471 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.602991 4660 scope.go:117] "RemoveContainer" containerID="93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.604862 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rcnn"] Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.646069 4660 scope.go:117] "RemoveContainer" containerID="4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8" Nov 29 07:52:52 crc kubenswrapper[4660]: E1129 07:52:52.646561 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8\": container with ID starting with 4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8 not found: ID does not exist" containerID="4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.646594 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8"} err="failed to get container status \"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8\": rpc error: code = NotFound desc = could not find container \"4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8\": container with ID starting with 4d0040b42d6d71eef8144e0a1680071cb3ac643e4413b1e5aedf1d1336ce39c8 not found: ID does not exist" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.646630 4660 scope.go:117] "RemoveContainer" containerID="3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d" Nov 29 07:52:52 crc kubenswrapper[4660]: E1129 07:52:52.647138 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d\": container with ID starting with 3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d not found: ID does not exist" containerID="3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.647223 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d"} err="failed to get container status \"3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d\": rpc error: code = NotFound desc = could not find container \"3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d\": container with ID starting with 3656f876633b09ffa6f918a78814f9e2161ba80d9e97fdf81326c3ffb2c2ac6d not found: ID does not exist" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.647276 4660 scope.go:117] "RemoveContainer" containerID="93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c" Nov 29 07:52:52 crc kubenswrapper[4660]: E1129 07:52:52.647855 4660 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c\": container with ID starting with 93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c not found: ID does not exist" containerID="93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c" Nov 29 07:52:52 crc kubenswrapper[4660]: I1129 07:52:52.647881 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c"} err="failed to get container status \"93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c\": rpc error: code = NotFound desc = could not find container \"93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c\": container with ID starting with 93f159068ff8bd4e5c29a83bec00385b9ccf4dea79176e42660714fdaacd108c not found: ID does not exist" Nov 29 07:52:53 crc kubenswrapper[4660]: I1129 07:52:53.704386 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" path="/var/lib/kubelet/pods/c4aec89f-374e-4386-8aec-48eb166d53d5/volumes" Nov 29 07:53:05 crc kubenswrapper[4660]: I1129 07:53:05.499842 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:53:05 crc kubenswrapper[4660]: I1129 07:53:05.500278 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:53:06 crc kubenswrapper[4660]: I1129 07:53:06.114193 4660 scope.go:117] "RemoveContainer" containerID="3fae29d4663d2aa9eac693cec9866e8b43aeb879305b90aadac97f32893cedd9" Nov 29 07:53:08 crc kubenswrapper[4660]: I1129 07:53:08.046720 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bgj57"] Nov 29 07:53:08 crc kubenswrapper[4660]: I1129 07:53:08.054189 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bgj57"] Nov 29 07:53:09 crc kubenswrapper[4660]: I1129 07:53:09.709734 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346088fe-bc54-45fa-95b1-7264614a2988" path="/var/lib/kubelet/pods/346088fe-bc54-45fa-95b1-7264614a2988/volumes" Nov 29 07:53:19 crc kubenswrapper[4660]: I1129 07:53:19.043439 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pbghl"] Nov 29 07:53:19 crc kubenswrapper[4660]: I1129 07:53:19.052003 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pbghl"] Nov 29 07:53:19 crc kubenswrapper[4660]: I1129 07:53:19.706414 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e208f39-0b45-484b-9bfb-9b0747126b84" path="/var/lib/kubelet/pods/7e208f39-0b45-484b-9bfb-9b0747126b84/volumes" Nov 29 07:53:35 crc kubenswrapper[4660]: I1129 07:53:35.500183 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 07:53:35 crc kubenswrapper[4660]: I1129 07:53:35.500738 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 07:53:35 crc kubenswrapper[4660]: I1129 07:53:35.500789 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 07:53:35 crc kubenswrapper[4660]: I1129 07:53:35.501538 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 07:53:35 crc kubenswrapper[4660]: I1129 07:53:35.501604 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" gracePeriod=600 Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.038394 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:53:36 crc kubenswrapper[4660]: E1129 07:53:36.038911 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="registry-server" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.038934 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="registry-server" Nov 29 07:53:36 crc kubenswrapper[4660]: E1129 07:53:36.038952 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="extract-utilities" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.038960 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="extract-utilities" Nov 29 07:53:36 crc kubenswrapper[4660]: E1129 07:53:36.038973 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="extract-content" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.038981 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="extract-content" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.039196 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4aec89f-374e-4386-8aec-48eb166d53d5" containerName="registry-server" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.040905 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.051154 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.089217 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k689\" (UniqueName: \"kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.089291 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.089338 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.194300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k689\" (UniqueName: \"kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.194673 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.194738 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.195131 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.196052 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.228532 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6k689\" (UniqueName: \"kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689\") pod \"community-operators-hjwcb\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.254878 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.256592 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.271163 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.296454 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.296556 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nngz\" (UniqueName: \"kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.296580 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.361859 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.397884 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nngz\" (UniqueName: \"kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.397941 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.398367 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.398517 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.398801 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.443177 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nngz\" (UniqueName: \"kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz\") pod \"certified-operators-ghp89\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:36 crc kubenswrapper[4660]: I1129 07:53:36.615875 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.026245 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:53:37 crc kubenswrapper[4660]: W1129 07:53:37.233571 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddede6d37_0424_4107_a7e2_fd8290982192.slice/crio-2eb0290f647e343e2bf3516b97d36aca582a3609175d7b94bc19edf63cb45a24 WatchSource:0}: Error finding container 2eb0290f647e343e2bf3516b97d36aca582a3609175d7b94bc19edf63cb45a24: Status 404 returned error can't find the container with id 2eb0290f647e343e2bf3516b97d36aca582a3609175d7b94bc19edf63cb45a24 Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.233729 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.983614 4660 generic.go:334] "Generic (PLEG): container finished" podID="dede6d37-0424-4107-a7e2-fd8290982192" containerID="25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0" exitCode=0 Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.983744 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerDied","Data":"25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0"} Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.984005 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerStarted","Data":"2eb0290f647e343e2bf3516b97d36aca582a3609175d7b94bc19edf63cb45a24"} Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.987297 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" exitCode=0 Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.987362 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"} Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.987397 4660 scope.go:117] "RemoveContainer" containerID="8ed2a620d981176cdc166494c87463a16e5568b6b9687983bde31b6ca61071a5" Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.990670 4660 generic.go:334] "Generic (PLEG): container finished" podID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerID="c5a0bb9991ffc71593f6495f8889a9527fd4480ea1d83e01093c0cb0f61a6af7" exitCode=0 Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.990721 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerDied","Data":"c5a0bb9991ffc71593f6495f8889a9527fd4480ea1d83e01093c0cb0f61a6af7"} Nov 29 07:53:37 crc kubenswrapper[4660]: I1129 07:53:37.990755 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerStarted","Data":"04421aa61a86f76e8ca82432172fba74168157b42741dbb9faca4cf9fe087b89"} Nov 29 07:53:38 
crc kubenswrapper[4660]: E1129 07:53:38.418673 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:53:39 crc kubenswrapper[4660]: I1129 07:53:39.000140 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:53:39 crc kubenswrapper[4660]: E1129 07:53:39.000366 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:53:40 crc kubenswrapper[4660]: I1129 07:53:40.015347 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerStarted","Data":"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a"} Nov 29 07:53:40 crc kubenswrapper[4660]: I1129 07:53:40.017901 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerStarted","Data":"13c2dc7b69c5bc87462fb1464e0b4e5138e9128f3aaef8c78ab5f5aaa77eb2e0"} Nov 29 07:53:43 crc kubenswrapper[4660]: I1129 07:53:43.042893 4660 generic.go:334] "Generic (PLEG): container finished" podID="dede6d37-0424-4107-a7e2-fd8290982192" containerID="13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a" exitCode=0 Nov 29 07:53:43 crc kubenswrapper[4660]: I1129 07:53:43.042988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerDied","Data":"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a"} Nov 29 07:53:43 crc kubenswrapper[4660]: I1129 07:53:43.046440 4660 generic.go:334] "Generic (PLEG): container finished" podID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerID="13c2dc7b69c5bc87462fb1464e0b4e5138e9128f3aaef8c78ab5f5aaa77eb2e0" exitCode=0 Nov 29 07:53:43 crc kubenswrapper[4660]: I1129 07:53:43.046533 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerDied","Data":"13c2dc7b69c5bc87462fb1464e0b4e5138e9128f3aaef8c78ab5f5aaa77eb2e0"} Nov 29 07:53:44 crc kubenswrapper[4660]: I1129 07:53:44.056281 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerStarted","Data":"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd"} Nov 29 07:53:44 crc kubenswrapper[4660]: I1129 07:53:44.059902 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" 
event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerStarted","Data":"12253566fda1d56c75aeeb3c2f826aa1bbe9b79b3ec1562aeda3df2cb7440187"} Nov 29 07:53:44 crc kubenswrapper[4660]: I1129 07:53:44.077093 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghp89" podStartSLOduration=2.571384022 podStartE2EDuration="8.077068155s" podCreationTimestamp="2025-11-29 07:53:36 +0000 UTC" firstStartedPulling="2025-11-29 07:53:37.985727588 +0000 UTC m=+2308.539257487" lastFinishedPulling="2025-11-29 07:53:43.491411721 +0000 UTC m=+2314.044941620" observedRunningTime="2025-11-29 07:53:44.072299146 +0000 UTC m=+2314.625829045" watchObservedRunningTime="2025-11-29 07:53:44.077068155 +0000 UTC m=+2314.630598054" Nov 29 07:53:44 crc kubenswrapper[4660]: I1129 07:53:44.111465 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hjwcb" podStartSLOduration=2.529507531 podStartE2EDuration="8.111449242s" podCreationTimestamp="2025-11-29 07:53:36 +0000 UTC" firstStartedPulling="2025-11-29 07:53:37.995480092 +0000 UTC m=+2308.549009981" lastFinishedPulling="2025-11-29 07:53:43.577421793 +0000 UTC m=+2314.130951692" observedRunningTime="2025-11-29 07:53:44.107018493 +0000 UTC m=+2314.660548392" watchObservedRunningTime="2025-11-29 07:53:44.111449242 +0000 UTC m=+2314.664979141" Nov 29 07:53:46 crc kubenswrapper[4660]: I1129 07:53:46.362725 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:46 crc kubenswrapper[4660]: I1129 07:53:46.363547 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:46 crc kubenswrapper[4660]: I1129 07:53:46.616043 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:46 crc kubenswrapper[4660]: I1129 07:53:46.616085 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:46 crc kubenswrapper[4660]: I1129 07:53:46.673769 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:47 crc kubenswrapper[4660]: I1129 07:53:47.412002 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hjwcb" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="registry-server" probeResult="failure" output=< Nov 29 07:53:47 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 07:53:47 crc kubenswrapper[4660]: > Nov 29 07:53:52 crc kubenswrapper[4660]: I1129 07:53:52.693091 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:53:52 crc kubenswrapper[4660]: E1129 07:53:52.723426 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:53:56 crc kubenswrapper[4660]: I1129 07:53:56.430005 4660 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:56 crc kubenswrapper[4660]: I1129 07:53:56.477174 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:53:56 crc kubenswrapper[4660]: I1129 07:53:56.656790 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:53:57 crc kubenswrapper[4660]: I1129 07:53:57.036703 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jx947"] Nov 29 07:53:57 crc kubenswrapper[4660]: I1129 07:53:57.045358 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jx947"] Nov 29 07:53:57 crc kubenswrapper[4660]: I1129 07:53:57.709564 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71369e18-7325-4509-9c64-2f59afb7513c" path="/var/lib/kubelet/pods/71369e18-7325-4509-9c64-2f59afb7513c/volumes" Nov 29 07:54:00 crc kubenswrapper[4660]: I1129 07:54:00.828367 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:54:00 crc kubenswrapper[4660]: I1129 07:54:00.828828 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hjwcb" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="registry-server" containerID="cri-o://12253566fda1d56c75aeeb3c2f826aa1bbe9b79b3ec1562aeda3df2cb7440187" gracePeriod=2 Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.216755 4660 generic.go:334] "Generic (PLEG): container finished" podID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerID="12253566fda1d56c75aeeb3c2f826aa1bbe9b79b3ec1562aeda3df2cb7440187" exitCode=0 Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.216823 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerDied","Data":"12253566fda1d56c75aeeb3c2f826aa1bbe9b79b3ec1562aeda3df2cb7440187"} Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.238043 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.238660 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghp89" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="registry-server" containerID="cri-o://e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd" gracePeriod=2 Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.468123 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.591243 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content\") pod \"5504a36b-2846-425a-bc3a-7524ce3ad045\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.591343 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k689\" (UniqueName: \"kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689\") pod \"5504a36b-2846-425a-bc3a-7524ce3ad045\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.591432 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities\") pod \"5504a36b-2846-425a-bc3a-7524ce3ad045\" (UID: \"5504a36b-2846-425a-bc3a-7524ce3ad045\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.592423 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities" (OuterVolumeSpecName: "utilities") pod "5504a36b-2846-425a-bc3a-7524ce3ad045" (UID: "5504a36b-2846-425a-bc3a-7524ce3ad045"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.597643 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689" (OuterVolumeSpecName: "kube-api-access-6k689") pod "5504a36b-2846-425a-bc3a-7524ce3ad045" (UID: "5504a36b-2846-425a-bc3a-7524ce3ad045"). InnerVolumeSpecName "kube-api-access-6k689". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.627436 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.651327 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5504a36b-2846-425a-bc3a-7524ce3ad045" (UID: "5504a36b-2846-425a-bc3a-7524ce3ad045"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.693654 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.693884 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k689\" (UniqueName: \"kubernetes.io/projected/5504a36b-2846-425a-bc3a-7524ce3ad045-kube-api-access-6k689\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.693941 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5504a36b-2846-425a-bc3a-7524ce3ad045-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.794713 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content\") pod \"dede6d37-0424-4107-a7e2-fd8290982192\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.794942 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nngz\" (UniqueName: \"kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz\") pod \"dede6d37-0424-4107-a7e2-fd8290982192\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.795018 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities\") pod \"dede6d37-0424-4107-a7e2-fd8290982192\" (UID: \"dede6d37-0424-4107-a7e2-fd8290982192\") " Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.795814 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities" (OuterVolumeSpecName: "utilities") pod "dede6d37-0424-4107-a7e2-fd8290982192" (UID: "dede6d37-0424-4107-a7e2-fd8290982192"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.797688 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz" (OuterVolumeSpecName: "kube-api-access-4nngz") pod "dede6d37-0424-4107-a7e2-fd8290982192" (UID: "dede6d37-0424-4107-a7e2-fd8290982192"). InnerVolumeSpecName "kube-api-access-4nngz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.848355 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dede6d37-0424-4107-a7e2-fd8290982192" (UID: "dede6d37-0424-4107-a7e2-fd8290982192"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.896661 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nngz\" (UniqueName: \"kubernetes.io/projected/dede6d37-0424-4107-a7e2-fd8290982192-kube-api-access-4nngz\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.896692 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:01 crc kubenswrapper[4660]: I1129 07:54:01.896703 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dede6d37-0424-4107-a7e2-fd8290982192-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.228841 4660 generic.go:334] "Generic (PLEG): container finished" podID="dede6d37-0424-4107-a7e2-fd8290982192" containerID="e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd" exitCode=0 Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.228910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerDied","Data":"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd"} Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.229025 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghp89" event={"ID":"dede6d37-0424-4107-a7e2-fd8290982192","Type":"ContainerDied","Data":"2eb0290f647e343e2bf3516b97d36aca582a3609175d7b94bc19edf63cb45a24"} Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.229067 4660 scope.go:117] "RemoveContainer" containerID="e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.229455 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghp89" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.232154 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjwcb" event={"ID":"5504a36b-2846-425a-bc3a-7524ce3ad045","Type":"ContainerDied","Data":"04421aa61a86f76e8ca82432172fba74168157b42741dbb9faca4cf9fe087b89"} Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.232257 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjwcb" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.263499 4660 scope.go:117] "RemoveContainer" containerID="13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.279049 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.298519 4660 scope.go:117] "RemoveContainer" containerID="25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.313755 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hjwcb"] Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.328208 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.338039 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghp89"] Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.373139 4660 scope.go:117] "RemoveContainer" containerID="e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd" Nov 29 07:54:02 crc kubenswrapper[4660]: E1129 07:54:02.374811 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd\": container with ID starting with e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd not found: ID does not exist" containerID="e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.374847 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd"} err="failed to get container status \"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd\": rpc error: code = NotFound desc = could not find container \"e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd\": container with ID starting with e799e97d3e1480050d972b0fb906e333c7cdb58ff42a41ec86d80c711e0b66cd not found: ID does not exist" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.374873 4660 scope.go:117] "RemoveContainer" containerID="13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a" Nov 29 07:54:02 crc kubenswrapper[4660]: E1129 07:54:02.376775 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a\": container with ID starting with 13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a not found: ID does not exist" containerID="13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.376808 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a"} err="failed to get container status \"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a\": rpc error: code = NotFound desc = could not find container \"13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a\": container with ID starting with 
13eef949b540dbcb4c971983bfb81e6dfe6a86cc29061eba65240bf88b44676a not found: ID does not exist" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.376831 4660 scope.go:117] "RemoveContainer" containerID="25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0" Nov 29 07:54:02 crc kubenswrapper[4660]: E1129 07:54:02.377189 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0\": container with ID starting with 25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0 not found: ID does not exist" containerID="25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.377216 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0"} err="failed to get container status \"25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0\": rpc error: code = NotFound desc = could not find container \"25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0\": container with ID starting with 25c1c6b65d91e614d8c35810916e9e2d7e0364ddd83bbcfc78424d2442263bf0 not found: ID does not exist" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.377233 4660 scope.go:117] "RemoveContainer" containerID="12253566fda1d56c75aeeb3c2f826aa1bbe9b79b3ec1562aeda3df2cb7440187" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.411734 4660 scope.go:117] "RemoveContainer" containerID="13c2dc7b69c5bc87462fb1464e0b4e5138e9128f3aaef8c78ab5f5aaa77eb2e0" Nov 29 07:54:02 crc kubenswrapper[4660]: I1129 07:54:02.435561 4660 scope.go:117] "RemoveContainer" containerID="c5a0bb9991ffc71593f6495f8889a9527fd4480ea1d83e01093c0cb0f61a6af7" Nov 29 07:54:03 crc kubenswrapper[4660]: I1129 07:54:03.695016 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:54:03 crc kubenswrapper[4660]: E1129 07:54:03.696259 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:54:03 crc kubenswrapper[4660]: I1129 07:54:03.716769 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" path="/var/lib/kubelet/pods/5504a36b-2846-425a-bc3a-7524ce3ad045/volumes" Nov 29 07:54:03 crc kubenswrapper[4660]: I1129 07:54:03.720584 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dede6d37-0424-4107-a7e2-fd8290982192" path="/var/lib/kubelet/pods/dede6d37-0424-4107-a7e2-fd8290982192/volumes" Nov 29 07:54:06 crc kubenswrapper[4660]: I1129 07:54:06.203880 4660 scope.go:117] "RemoveContainer" containerID="2025550a58bb24ceca22edb6c65cc03c22fefc72a454efba09b009650479bb20" Nov 29 07:54:06 crc kubenswrapper[4660]: I1129 07:54:06.256581 4660 scope.go:117] "RemoveContainer" containerID="4455822cba4968313c1e901f313177b4707d2332c20ef84973d89548333aba4d" Nov 29 07:54:06 crc kubenswrapper[4660]: I1129 07:54:06.321643 4660 scope.go:117] "RemoveContainer" 
containerID="8da211a0ec399596effa2ceccb3e41bd54d57bc3950675de040c5d4e1ddf623a" Nov 29 07:54:08 crc kubenswrapper[4660]: I1129 07:54:08.289438 4660 generic.go:334] "Generic (PLEG): container finished" podID="8d0ffb5c-54ae-48a8-9448-7b78f45814a7" containerID="ac2dcce041384c7069c131f6773fa7d59a7b1bbeca4e62325108683de38a5e0b" exitCode=0 Nov 29 07:54:08 crc kubenswrapper[4660]: I1129 07:54:08.289522 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" event={"ID":"8d0ffb5c-54ae-48a8-9448-7b78f45814a7","Type":"ContainerDied","Data":"ac2dcce041384c7069c131f6773fa7d59a7b1bbeca4e62325108683de38a5e0b"} Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.805257 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.940722 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory\") pod \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.940958 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjvw\" (UniqueName: \"kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw\") pod \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.941039 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key\") pod \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\" (UID: \"8d0ffb5c-54ae-48a8-9448-7b78f45814a7\") " Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.947409 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw" (OuterVolumeSpecName: "kube-api-access-nhjvw") pod "8d0ffb5c-54ae-48a8-9448-7b78f45814a7" (UID: "8d0ffb5c-54ae-48a8-9448-7b78f45814a7"). InnerVolumeSpecName "kube-api-access-nhjvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.976900 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory" (OuterVolumeSpecName: "inventory") pod "8d0ffb5c-54ae-48a8-9448-7b78f45814a7" (UID: "8d0ffb5c-54ae-48a8-9448-7b78f45814a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:09 crc kubenswrapper[4660]: I1129 07:54:09.977373 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8d0ffb5c-54ae-48a8-9448-7b78f45814a7" (UID: "8d0ffb5c-54ae-48a8-9448-7b78f45814a7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.043806 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjvw\" (UniqueName: \"kubernetes.io/projected/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-kube-api-access-nhjvw\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.044026 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.044135 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d0ffb5c-54ae-48a8-9448-7b78f45814a7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.315087 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" event={"ID":"8d0ffb5c-54ae-48a8-9448-7b78f45814a7","Type":"ContainerDied","Data":"9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4"} Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.315126 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad0180c6e8be56cdcdcc4685dddef3e75ec5ac13b93a7605f36fbbaa70e73e4" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.315177 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.410954 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"] Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411456 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="extract-utilities" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411484 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="extract-utilities" Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411559 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="extract-content" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411572 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="extract-content" Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411604 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="extract-utilities" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411638 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="extract-utilities" Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411659 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0ffb5c-54ae-48a8-9448-7b78f45814a7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411672 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0ffb5c-54ae-48a8-9448-7b78f45814a7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411694 4660 
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411706 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="registry-server"
Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411726 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="registry-server"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411736 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="registry-server"
Nov 29 07:54:10 crc kubenswrapper[4660]: E1129 07:54:10.411748 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="extract-content"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.411758 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="extract-content"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.412049 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="5504a36b-2846-425a-bc3a-7524ce3ad045" containerName="registry-server"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.412090 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0ffb5c-54ae-48a8-9448-7b78f45814a7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.412112 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="dede6d37-0424-4107-a7e2-fd8290982192" containerName="registry-server"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.414553 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.416308 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.418874 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.418927 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.419340 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.423547 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"]
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.553318 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.553435 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpq5\" (UniqueName: \"kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.553477 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.654816 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.655109 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpq5\" (UniqueName: \"kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.655149 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.659053 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.659064 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.671821 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpq5\" (UniqueName: \"kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rwss\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:10 crc kubenswrapper[4660]: I1129 07:54:10.732010 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"
Nov 29 07:54:11 crc kubenswrapper[4660]: I1129 07:54:11.286942 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss"]
Nov 29 07:54:11 crc kubenswrapper[4660]: I1129 07:54:11.336516 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" event={"ID":"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f","Type":"ContainerStarted","Data":"f7b02a8b046ff5958f988a41db3d7d499529c0737243ceb49c7cdf7e14361c28"}
Nov 29 07:54:13 crc kubenswrapper[4660]: I1129 07:54:13.359258 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" event={"ID":"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f","Type":"ContainerStarted","Data":"8c7756ef5ded5926060cf6f1cca84a86dc1474126628986b37d1f4fad28e2478"}
Nov 29 07:54:13 crc kubenswrapper[4660]: I1129 07:54:13.381545 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" podStartSLOduration=2.447517881 podStartE2EDuration="3.381524165s" podCreationTimestamp="2025-11-29 07:54:10 +0000 UTC" firstStartedPulling="2025-11-29 07:54:11.290207873 +0000 UTC m=+2341.843737772" lastFinishedPulling="2025-11-29 07:54:12.224214157 +0000 UTC m=+2342.777744056" observedRunningTime="2025-11-29 07:54:13.378859263 +0000 UTC m=+2343.932389162" watchObservedRunningTime="2025-11-29 07:54:13.381524165 +0000 UTC m=+2343.935054074"
Nov 29 07:54:15 crc kubenswrapper[4660]: I1129 07:54:15.693986 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:54:15 crc kubenswrapper[4660]: E1129 07:54:15.694799 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:54:18 crc kubenswrapper[4660]: I1129 07:54:18.408919 4660 generic.go:334] "Generic (PLEG): container finished" podID="fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" containerID="8c7756ef5ded5926060cf6f1cca84a86dc1474126628986b37d1f4fad28e2478" exitCode=0 Nov 29 07:54:18 crc kubenswrapper[4660]: I1129 07:54:18.409007 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" event={"ID":"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f","Type":"ContainerDied","Data":"8c7756ef5ded5926060cf6f1cca84a86dc1474126628986b37d1f4fad28e2478"} Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.813166 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.949334 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key\") pod \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.949739 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory\") pod \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.950038 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vpq5\" (UniqueName: \"kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5\") pod \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\" (UID: \"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f\") " Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.956639 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5" (OuterVolumeSpecName: "kube-api-access-2vpq5") pod "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" (UID: "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f"). InnerVolumeSpecName "kube-api-access-2vpq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.989472 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" (UID: "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:19 crc kubenswrapper[4660]: I1129 07:54:19.990552 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory" (OuterVolumeSpecName: "inventory") pod "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" (UID: "fb67bcf4-d0ed-4dbb-b571-322a52c4c43f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.053810 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.053844 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.053860 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vpq5\" (UniqueName: \"kubernetes.io/projected/fb67bcf4-d0ed-4dbb-b571-322a52c4c43f-kube-api-access-2vpq5\") on node \"crc\" DevicePath \"\"" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.430314 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" event={"ID":"fb67bcf4-d0ed-4dbb-b571-322a52c4c43f","Type":"ContainerDied","Data":"f7b02a8b046ff5958f988a41db3d7d499529c0737243ceb49c7cdf7e14361c28"} Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.430352 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b02a8b046ff5958f988a41db3d7d499529c0737243ceb49c7cdf7e14361c28" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.430391 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rwss" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.517921 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk"] Nov 29 07:54:20 crc kubenswrapper[4660]: E1129 07:54:20.518799 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.518929 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.519359 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb67bcf4-d0ed-4dbb-b571-322a52c4c43f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.520493 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.525532 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.525552 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.525653 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.525782 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.530059 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk"] Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.664962 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.665451 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.665604 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjpg\" (UniqueName: \"kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.767725 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.767904 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.767947 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjpg\" (UniqueName: \"kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: 
\"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.772844 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.781219 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.784923 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjpg\" (UniqueName: \"kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thxwk\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:20 crc kubenswrapper[4660]: I1129 07:54:20.836912 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" Nov 29 07:54:21 crc kubenswrapper[4660]: I1129 07:54:21.410047 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk"] Nov 29 07:54:21 crc kubenswrapper[4660]: I1129 07:54:21.446754 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" event={"ID":"955fb591-0de6-4f55-a61f-fc232791fe54","Type":"ContainerStarted","Data":"82cee8690186ba7292445c6ca0c77bb7997bbe65a9c772a65b7c7e6e7aa58a2c"} Nov 29 07:54:26 crc kubenswrapper[4660]: I1129 07:54:26.510456 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" event={"ID":"955fb591-0de6-4f55-a61f-fc232791fe54","Type":"ContainerStarted","Data":"ff73357f4a8926f1beb575d0c9327e0808997a00f71c5d102b31c0a5d11b38d4"} Nov 29 07:54:26 crc kubenswrapper[4660]: I1129 07:54:26.547156 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" podStartSLOduration=2.75703541 podStartE2EDuration="6.54713598s" podCreationTimestamp="2025-11-29 07:54:20 +0000 UTC" firstStartedPulling="2025-11-29 07:54:21.415537842 +0000 UTC m=+2351.969067731" lastFinishedPulling="2025-11-29 07:54:25.205638402 +0000 UTC m=+2355.759168301" observedRunningTime="2025-11-29 07:54:26.53712034 +0000 UTC m=+2357.090650319" watchObservedRunningTime="2025-11-29 07:54:26.54713598 +0000 UTC m=+2357.100665889" Nov 29 07:54:27 crc kubenswrapper[4660]: I1129 07:54:27.693450 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:54:27 crc kubenswrapper[4660]: E1129 07:54:27.693716 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:54:40 crc kubenswrapper[4660]: I1129 07:54:40.693654 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:54:40 crc kubenswrapper[4660]: E1129 07:54:40.694396 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:54:53 crc kubenswrapper[4660]: I1129 07:54:53.694258 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:54:53 crc kubenswrapper[4660]: E1129 07:54:53.695132 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:55:08 crc kubenswrapper[4660]: I1129 07:55:08.694307 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:55:08 crc kubenswrapper[4660]: E1129 07:55:08.696386 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:55:11 crc kubenswrapper[4660]: I1129 07:55:11.941009 4660 generic.go:334] "Generic (PLEG): container finished" podID="955fb591-0de6-4f55-a61f-fc232791fe54" containerID="ff73357f4a8926f1beb575d0c9327e0808997a00f71c5d102b31c0a5d11b38d4" exitCode=0 Nov 29 07:55:11 crc kubenswrapper[4660]: I1129 07:55:11.941136 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" event={"ID":"955fb591-0de6-4f55-a61f-fc232791fe54","Type":"ContainerDied","Data":"ff73357f4a8926f1beb575d0c9327e0808997a00f71c5d102b31c0a5d11b38d4"} Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.389446 4660 util.go:48] "No ready sandbox for pod can be found. 
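The machine-config-daemon-bjw9w pairs above ("RemoveContainer" followed by "Error syncing pod, skipping") now repeat every 12-15 seconds, and each attempt is refused with "back-off 5m0s": the container has crash-looped long enough to reach the kubelet's maximum restart back-off, so every sync inside the window is skipped. A sketch of the schedule, assuming the usual kubelet defaults (10s initial delay, doubling per restart, 5-minute cap; the cap is the value quoted in the message):

```python
from datetime import timedelta

# Assumed defaults: initial 10s back-off, doubled after each failed
# restart, capped at 5m (the "back-off 5m0s" seen above).
initial, cap = timedelta(seconds=10), timedelta(minutes=5)

delay, restart = initial, 1
while delay < cap:
    print(f"restart {restart}: wait {delay}")
    delay = min(delay * 2, cap)
    restart += 1
print(f"restart {restart} and later: wait {cap} (cap reached)")
```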
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.437248 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpjpg\" (UniqueName: \"kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg\") pod \"955fb591-0de6-4f55-a61f-fc232791fe54\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") "
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.437346 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key\") pod \"955fb591-0de6-4f55-a61f-fc232791fe54\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") "
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.437600 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory\") pod \"955fb591-0de6-4f55-a61f-fc232791fe54\" (UID: \"955fb591-0de6-4f55-a61f-fc232791fe54\") "
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.442971 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg" (OuterVolumeSpecName: "kube-api-access-zpjpg") pod "955fb591-0de6-4f55-a61f-fc232791fe54" (UID: "955fb591-0de6-4f55-a61f-fc232791fe54"). InnerVolumeSpecName "kube-api-access-zpjpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.479867 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory" (OuterVolumeSpecName: "inventory") pod "955fb591-0de6-4f55-a61f-fc232791fe54" (UID: "955fb591-0de6-4f55-a61f-fc232791fe54"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.482522 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "955fb591-0de6-4f55-a61f-fc232791fe54" (UID: "955fb591-0de6-4f55-a61f-fc232791fe54"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.540105 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-inventory\") on node \"crc\" DevicePath \"\""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.540146 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpjpg\" (UniqueName: \"kubernetes.io/projected/955fb591-0de6-4f55-a61f-fc232791fe54-kube-api-access-zpjpg\") on node \"crc\" DevicePath \"\""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.540157 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/955fb591-0de6-4f55-a61f-fc232791fe54-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.961216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk" event={"ID":"955fb591-0de6-4f55-a61f-fc232791fe54","Type":"ContainerDied","Data":"82cee8690186ba7292445c6ca0c77bb7997bbe65a9c772a65b7c7e6e7aa58a2c"}
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.961259 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82cee8690186ba7292445c6ca0c77bb7997bbe65a9c772a65b7c7e6e7aa58a2c"
Nov 29 07:55:13 crc kubenswrapper[4660]: I1129 07:55:13.961331 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thxwk"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.043994 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"]
Nov 29 07:55:14 crc kubenswrapper[4660]: E1129 07:55:14.044418 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955fb591-0de6-4f55-a61f-fc232791fe54" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.044437 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="955fb591-0de6-4f55-a61f-fc232791fe54" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.044640 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="955fb591-0de6-4f55-a61f-fc232791fe54" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.045271 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.047624 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.047829 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.047748 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.055827 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.068366 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"]
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.157087 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7wvf\" (UniqueName: \"kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.157762 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.157870 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.259838 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7wvf\" (UniqueName: \"kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.260128 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.260308 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.265038 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.267684 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.277128 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7wvf\" (UniqueName: \"kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.360118 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.937264 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"]
Nov 29 07:55:14 crc kubenswrapper[4660]: I1129 07:55:14.975448 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" event={"ID":"b6e39886-2df6-4257-babe-441252581041","Type":"ContainerStarted","Data":"786c7b835411cc85187a6b9517b5535a3f8e770a60b9d618fbe81781bd9d136d"}
Nov 29 07:55:17 crc kubenswrapper[4660]: I1129 07:55:17.838935 4660 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bhg29 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 29 07:55:17 crc kubenswrapper[4660]: I1129 07:55:17.839379 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bhg29" podUID="a2998d6f-01b6-4b4a-a5ca-44412d764e16" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 29 07:55:22 crc kubenswrapper[4660]: I1129 07:55:22.032121 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" event={"ID":"b6e39886-2df6-4257-babe-441252581041","Type":"ContainerStarted","Data":"aa501471c901d3cab40f1ecb55d7bf54db0784246afa1dd742d5592c3494afff"}
Nov 29 07:55:22 crc kubenswrapper[4660]: I1129 07:55:22.073753 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" podStartSLOduration=1.910567038 podStartE2EDuration="8.073729667s" podCreationTimestamp="2025-11-29 07:55:14 +0000 UTC" firstStartedPulling="2025-11-29 07:55:14.94069696 +0000 UTC m=+2405.494226859" lastFinishedPulling="2025-11-29 07:55:21.103859589 +0000 UTC m=+2411.657389488" observedRunningTime="2025-11-29 07:55:22.060337137 +0000 UTC m=+2412.613867046" watchObservedRunningTime="2025-11-29 07:55:22.073729667 +0000 UTC m=+2412.627259566"
Nov 29 07:55:22 crc kubenswrapper[4660]: I1129 07:55:22.693945 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:55:22 crc kubenswrapper[4660]: E1129 07:55:22.694380 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:55:36 crc kubenswrapper[4660]: I1129 07:55:36.694073 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:55:36 crc kubenswrapper[4660]: E1129 07:55:36.694934 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:55:51 crc kubenswrapper[4660]: I1129 07:55:51.693601 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:55:51 crc kubenswrapper[4660]: E1129 07:55:51.694431 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:56:04 crc kubenswrapper[4660]: I1129 07:56:04.695350 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:56:04 crc kubenswrapper[4660]: E1129 07:56:04.696752 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:56:18 crc kubenswrapper[4660]: I1129 07:56:18.693708 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf"
Nov 29 07:56:18 crc kubenswrapper[4660]: E1129 07:56:18.694903 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"
Nov 29 07:56:25 crc kubenswrapper[4660]: I1129 07:56:25.571775 4660 generic.go:334] "Generic (PLEG): container finished" podID="b6e39886-2df6-4257-babe-441252581041" containerID="aa501471c901d3cab40f1ecb55d7bf54db0784246afa1dd742d5592c3494afff" exitCode=0
Nov 29 07:56:25 crc kubenswrapper[4660]: I1129 07:56:25.571884 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" event={"ID":"b6e39886-2df6-4257-babe-441252581041","Type":"ContainerDied","Data":"aa501471c901d3cab40f1ecb55d7bf54db0784246afa1dd742d5592c3494afff"}
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.055740 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9"
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.224960 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory\") pod \"b6e39886-2df6-4257-babe-441252581041\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") "
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.225119 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7wvf\" (UniqueName: \"kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf\") pod \"b6e39886-2df6-4257-babe-441252581041\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") "
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.225259 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key\") pod \"b6e39886-2df6-4257-babe-441252581041\" (UID: \"b6e39886-2df6-4257-babe-441252581041\") "
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.232437 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf" (OuterVolumeSpecName: "kube-api-access-b7wvf") pod "b6e39886-2df6-4257-babe-441252581041" (UID: "b6e39886-2df6-4257-babe-441252581041"). InnerVolumeSpecName "kube-api-access-b7wvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.256751 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory" (OuterVolumeSpecName: "inventory") pod "b6e39886-2df6-4257-babe-441252581041" (UID: "b6e39886-2df6-4257-babe-441252581041"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.260889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6e39886-2df6-4257-babe-441252581041" (UID: "b6e39886-2df6-4257-babe-441252581041"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
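Stepping back, this stretch of the journal shows a chain of one-shot data-plane jobs, each finishing with exitCode=0 before the next is ADDed: configure-network, then validate-network, install-os, configure-os, and ssh-known-hosts. A sketch that recovers that order from the same assumed kubelet.log export by matching the "SyncLoop ADD" and "container finished" entries:

```python
import re
import sys

# Hypothetical extractor for the job chain visible above.
ADD = re.compile(r'"SyncLoop ADD" source="api" pods=\["(openstack/[^"]+)"\]')
FINISHED = re.compile(
    r'"Generic \(PLEG\): container finished" podID="([0-9a-f-]+)".*exitCode=(\d+)'
)

def main(path: str = "kubelet.log") -> None:
    for line in open(path, encoding="utf-8"):
        if m := ADD.search(line):
            print("scheduled:", m.group(1))
        elif m := FINISHED.search(line):
            verdict = "ok" if m.group(2) == "0" else f"exit {m.group(2)}"
            print(f"finished: pod {m.group(1)[:8]} ({verdict})")

if __name__ == "__main__":
    main(*sys.argv[1:])
```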
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.327973 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.328011 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6e39886-2df6-4257-babe-441252581041-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.328024 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7wvf\" (UniqueName: \"kubernetes.io/projected/b6e39886-2df6-4257-babe-441252581041-kube-api-access-b7wvf\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.593292 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" event={"ID":"b6e39886-2df6-4257-babe-441252581041","Type":"ContainerDied","Data":"786c7b835411cc85187a6b9517b5535a3f8e770a60b9d618fbe81781bd9d136d"} Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.593714 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="786c7b835411cc85187a6b9517b5535a3f8e770a60b9d618fbe81781bd9d136d" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.593370 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.704855 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hcf98"] Nov 29 07:56:27 crc kubenswrapper[4660]: E1129 07:56:27.705316 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e39886-2df6-4257-babe-441252581041" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.705340 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e39886-2df6-4257-babe-441252581041" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.705568 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e39886-2df6-4257-babe-441252581041" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.706327 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.712368 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.712580 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.712660 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.712599 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.714279 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hcf98"] Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.844043 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.844358 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdcr7\" (UniqueName: \"kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.844467 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.946698 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.946818 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.946859 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdcr7\" (UniqueName: \"kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc 
kubenswrapper[4660]: I1129 07:56:27.951253 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.956189 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:27 crc kubenswrapper[4660]: I1129 07:56:27.965521 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdcr7\" (UniqueName: \"kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7\") pod \"ssh-known-hosts-edpm-deployment-hcf98\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:28 crc kubenswrapper[4660]: I1129 07:56:28.044390 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:28 crc kubenswrapper[4660]: I1129 07:56:28.600359 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hcf98"] Nov 29 07:56:28 crc kubenswrapper[4660]: W1129 07:56:28.609568 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4118c243_9402_4481_abdd_0a5d0581415b.slice/crio-f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7 WatchSource:0}: Error finding container f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7: Status 404 returned error can't find the container with id f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7 Nov 29 07:56:28 crc kubenswrapper[4660]: I1129 07:56:28.612910 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 07:56:29 crc kubenswrapper[4660]: I1129 07:56:29.611778 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" event={"ID":"4118c243-9402-4481-abdd-0a5d0581415b","Type":"ContainerStarted","Data":"f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7"} Nov 29 07:56:30 crc kubenswrapper[4660]: I1129 07:56:30.623016 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" event={"ID":"4118c243-9402-4481-abdd-0a5d0581415b","Type":"ContainerStarted","Data":"b3cd86ff3f6185481c4ffaf75f2e846ff3a066f9ddd654fe27355415181d5bf2"} Nov 29 07:56:30 crc kubenswrapper[4660]: I1129 07:56:30.656060 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" podStartSLOduration=2.073700368 podStartE2EDuration="3.655981238s" podCreationTimestamp="2025-11-29 07:56:27 +0000 UTC" firstStartedPulling="2025-11-29 07:56:28.612669615 +0000 UTC m=+2479.166199524" lastFinishedPulling="2025-11-29 07:56:30.194950495 +0000 UTC m=+2480.748480394" observedRunningTime="2025-11-29 07:56:30.647168672 +0000 UTC m=+2481.200698571" watchObservedRunningTime="2025-11-29 07:56:30.655981238 +0000 UTC 
m=+2481.209511177" Nov 29 07:56:30 crc kubenswrapper[4660]: I1129 07:56:30.695863 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:56:30 crc kubenswrapper[4660]: E1129 07:56:30.697171 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:56:38 crc kubenswrapper[4660]: I1129 07:56:38.708222 4660 generic.go:334] "Generic (PLEG): container finished" podID="4118c243-9402-4481-abdd-0a5d0581415b" containerID="b3cd86ff3f6185481c4ffaf75f2e846ff3a066f9ddd654fe27355415181d5bf2" exitCode=0 Nov 29 07:56:38 crc kubenswrapper[4660]: I1129 07:56:38.708486 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" event={"ID":"4118c243-9402-4481-abdd-0a5d0581415b","Type":"ContainerDied","Data":"b3cd86ff3f6185481c4ffaf75f2e846ff3a066f9ddd654fe27355415181d5bf2"} Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.131628 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.286944 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0\") pod \"4118c243-9402-4481-abdd-0a5d0581415b\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.287072 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdcr7\" (UniqueName: \"kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7\") pod \"4118c243-9402-4481-abdd-0a5d0581415b\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.287215 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam\") pod \"4118c243-9402-4481-abdd-0a5d0581415b\" (UID: \"4118c243-9402-4481-abdd-0a5d0581415b\") " Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.294841 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7" (OuterVolumeSpecName: "kube-api-access-bdcr7") pod "4118c243-9402-4481-abdd-0a5d0581415b" (UID: "4118c243-9402-4481-abdd-0a5d0581415b"). InnerVolumeSpecName "kube-api-access-bdcr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.312983 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4118c243-9402-4481-abdd-0a5d0581415b" (UID: "4118c243-9402-4481-abdd-0a5d0581415b"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.318856 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4118c243-9402-4481-abdd-0a5d0581415b" (UID: "4118c243-9402-4481-abdd-0a5d0581415b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.389409 4660 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.389440 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdcr7\" (UniqueName: \"kubernetes.io/projected/4118c243-9402-4481-abdd-0a5d0581415b-kube-api-access-bdcr7\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.389451 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4118c243-9402-4481-abdd-0a5d0581415b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.728917 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" event={"ID":"4118c243-9402-4481-abdd-0a5d0581415b","Type":"ContainerDied","Data":"f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7"} Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.728961 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f450ce5dd4e4854b19d6c200ab769938ecd59180bda5ba32524ba3fac2f50be7" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.729114 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hcf98" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.808311 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b"] Nov 29 07:56:40 crc kubenswrapper[4660]: E1129 07:56:40.808808 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4118c243-9402-4481-abdd-0a5d0581415b" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.808836 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4118c243-9402-4481-abdd-0a5d0581415b" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.809064 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4118c243-9402-4481-abdd-0a5d0581415b" containerName="ssh-known-hosts-edpm-deployment" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.809894 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.813009 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.813780 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.814051 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.814243 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.820290 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b"] Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.898982 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.899088 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sdb\" (UniqueName: \"kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:40 crc kubenswrapper[4660]: I1129 07:56:40.899139 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.001931 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.002068 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.002195 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sdb\" (UniqueName: \"kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.006392 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.006909 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.023165 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sdb\" (UniqueName: \"kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tff2b\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.135012 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.666472 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b"] Nov 29 07:56:41 crc kubenswrapper[4660]: I1129 07:56:41.737905 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" event={"ID":"6a6ef616-fee3-4bcb-acef-c63943b96e22","Type":"ContainerStarted","Data":"8723cb15719c80f7f58c9af8c5714e806677d67de8050966bdedbad3fd227fc6"} Nov 29 07:56:43 crc kubenswrapper[4660]: I1129 07:56:43.754938 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" event={"ID":"6a6ef616-fee3-4bcb-acef-c63943b96e22","Type":"ContainerStarted","Data":"7bfc8bb71986f54cd82e63d030f672aa40ef584d0f3af35c5e9bbde2a3281406"} Nov 29 07:56:43 crc kubenswrapper[4660]: I1129 07:56:43.779998 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" podStartSLOduration=3.253934699 podStartE2EDuration="3.779981376s" podCreationTimestamp="2025-11-29 07:56:40 +0000 UTC" firstStartedPulling="2025-11-29 07:56:41.678723509 +0000 UTC m=+2492.232253408" lastFinishedPulling="2025-11-29 07:56:42.204770186 +0000 UTC m=+2492.758300085" observedRunningTime="2025-11-29 07:56:43.776255996 +0000 UTC m=+2494.329785895" watchObservedRunningTime="2025-11-29 07:56:43.779981376 +0000 UTC m=+2494.333511275" Nov 29 07:56:44 crc kubenswrapper[4660]: I1129 07:56:44.696328 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:56:44 crc kubenswrapper[4660]: E1129 07:56:44.696653 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:56:52 crc kubenswrapper[4660]: I1129 07:56:52.834114 4660 generic.go:334] "Generic (PLEG): container finished" podID="6a6ef616-fee3-4bcb-acef-c63943b96e22" containerID="7bfc8bb71986f54cd82e63d030f672aa40ef584d0f3af35c5e9bbde2a3281406" exitCode=0 Nov 29 07:56:52 crc kubenswrapper[4660]: I1129 07:56:52.834719 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" event={"ID":"6a6ef616-fee3-4bcb-acef-c63943b96e22","Type":"ContainerDied","Data":"7bfc8bb71986f54cd82e63d030f672aa40ef584d0f3af35c5e9bbde2a3281406"} Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.261903 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.357493 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory\") pod \"6a6ef616-fee3-4bcb-acef-c63943b96e22\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.357592 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4sdb\" (UniqueName: \"kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb\") pod \"6a6ef616-fee3-4bcb-acef-c63943b96e22\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.357726 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key\") pod \"6a6ef616-fee3-4bcb-acef-c63943b96e22\" (UID: \"6a6ef616-fee3-4bcb-acef-c63943b96e22\") " Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.366923 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb" (OuterVolumeSpecName: "kube-api-access-t4sdb") pod "6a6ef616-fee3-4bcb-acef-c63943b96e22" (UID: "6a6ef616-fee3-4bcb-acef-c63943b96e22"). InnerVolumeSpecName "kube-api-access-t4sdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.390531 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a6ef616-fee3-4bcb-acef-c63943b96e22" (UID: "6a6ef616-fee3-4bcb-acef-c63943b96e22"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.407512 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory" (OuterVolumeSpecName: "inventory") pod "6a6ef616-fee3-4bcb-acef-c63943b96e22" (UID: "6a6ef616-fee3-4bcb-acef-c63943b96e22"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.460194 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4sdb\" (UniqueName: \"kubernetes.io/projected/6a6ef616-fee3-4bcb-acef-c63943b96e22-kube-api-access-t4sdb\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.460239 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.460252 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a6ef616-fee3-4bcb-acef-c63943b96e22-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.852984 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" event={"ID":"6a6ef616-fee3-4bcb-acef-c63943b96e22","Type":"ContainerDied","Data":"8723cb15719c80f7f58c9af8c5714e806677d67de8050966bdedbad3fd227fc6"} Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.853022 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8723cb15719c80f7f58c9af8c5714e806677d67de8050966bdedbad3fd227fc6" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.853401 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tff2b" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.936633 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x"] Nov 29 07:56:54 crc kubenswrapper[4660]: E1129 07:56:54.937036 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6ef616-fee3-4bcb-acef-c63943b96e22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.937056 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6ef616-fee3-4bcb-acef-c63943b96e22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.937254 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6ef616-fee3-4bcb-acef-c63943b96e22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.937866 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.940148 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.940448 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.940798 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.941779 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:56:54 crc kubenswrapper[4660]: I1129 07:56:54.953392 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x"] Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.070693 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.070757 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtwj\" (UniqueName: \"kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.070955 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.172390 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.172492 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.172523 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtwj\" (UniqueName: \"kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: 
\"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.187802 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.187867 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.196032 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtwj\" (UniqueName: \"kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.303196 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.858270 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x"] Nov 29 07:56:55 crc kubenswrapper[4660]: I1129 07:56:55.874492 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" event={"ID":"36029f28-c187-4b77-afda-fd74d56bd1c5","Type":"ContainerStarted","Data":"e497c0254036e015e9f08d45ff003faa1784d382676fd986118cff0c143b0d58"} Nov 29 07:56:56 crc kubenswrapper[4660]: I1129 07:56:56.884799 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" event={"ID":"36029f28-c187-4b77-afda-fd74d56bd1c5","Type":"ContainerStarted","Data":"0463f5348435aaaabb2d2415afaf5fe8473d95e3ee8f5c2109ad12985d5b3abd"} Nov 29 07:56:56 crc kubenswrapper[4660]: I1129 07:56:56.905777 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" podStartSLOduration=2.398462528 podStartE2EDuration="2.905759901s" podCreationTimestamp="2025-11-29 07:56:54 +0000 UTC" firstStartedPulling="2025-11-29 07:56:55.861956311 +0000 UTC m=+2506.415486210" lastFinishedPulling="2025-11-29 07:56:56.369253674 +0000 UTC m=+2506.922783583" observedRunningTime="2025-11-29 07:56:56.903958353 +0000 UTC m=+2507.457488252" watchObservedRunningTime="2025-11-29 07:56:56.905759901 +0000 UTC m=+2507.459289800" Nov 29 07:56:57 crc kubenswrapper[4660]: I1129 07:56:57.694312 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:56:57 crc kubenswrapper[4660]: E1129 07:56:57.694918 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:57:08 crc kubenswrapper[4660]: I1129 07:57:08.988694 4660 generic.go:334] "Generic (PLEG): container finished" podID="36029f28-c187-4b77-afda-fd74d56bd1c5" containerID="0463f5348435aaaabb2d2415afaf5fe8473d95e3ee8f5c2109ad12985d5b3abd" exitCode=0 Nov 29 07:57:08 crc kubenswrapper[4660]: I1129 07:57:08.988911 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" event={"ID":"36029f28-c187-4b77-afda-fd74d56bd1c5","Type":"ContainerDied","Data":"0463f5348435aaaabb2d2415afaf5fe8473d95e3ee8f5c2109ad12985d5b3abd"} Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.428000 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.506166 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory\") pod \"36029f28-c187-4b77-afda-fd74d56bd1c5\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.506246 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key\") pod \"36029f28-c187-4b77-afda-fd74d56bd1c5\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.506403 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjtwj\" (UniqueName: \"kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj\") pod \"36029f28-c187-4b77-afda-fd74d56bd1c5\" (UID: \"36029f28-c187-4b77-afda-fd74d56bd1c5\") " Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.513591 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj" (OuterVolumeSpecName: "kube-api-access-wjtwj") pod "36029f28-c187-4b77-afda-fd74d56bd1c5" (UID: "36029f28-c187-4b77-afda-fd74d56bd1c5"). InnerVolumeSpecName "kube-api-access-wjtwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.533271 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "36029f28-c187-4b77-afda-fd74d56bd1c5" (UID: "36029f28-c187-4b77-afda-fd74d56bd1c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.544789 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory" (OuterVolumeSpecName: "inventory") pod "36029f28-c187-4b77-afda-fd74d56bd1c5" (UID: "36029f28-c187-4b77-afda-fd74d56bd1c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.609631 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.609667 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36029f28-c187-4b77-afda-fd74d56bd1c5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:10 crc kubenswrapper[4660]: I1129 07:57:10.609677 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjtwj\" (UniqueName: \"kubernetes.io/projected/36029f28-c187-4b77-afda-fd74d56bd1c5-kube-api-access-wjtwj\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.014740 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" event={"ID":"36029f28-c187-4b77-afda-fd74d56bd1c5","Type":"ContainerDied","Data":"e497c0254036e015e9f08d45ff003faa1784d382676fd986118cff0c143b0d58"} Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.015000 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e497c0254036e015e9f08d45ff003faa1784d382676fd986118cff0c143b0d58" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.014796 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.115355 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv"] Nov 29 07:57:11 crc kubenswrapper[4660]: E1129 07:57:11.116044 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36029f28-c187-4b77-afda-fd74d56bd1c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.116124 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="36029f28-c187-4b77-afda-fd74d56bd1c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.116388 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="36029f28-c187-4b77-afda-fd74d56bd1c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.117112 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.123659 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.123859 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.124023 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.126013 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.126157 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.126261 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.126570 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.128104 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.138185 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv"] Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221749 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221816 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221874 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221906 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9r4k\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221931 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.221981 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222005 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222050 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222090 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222105 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222287 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222377 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222426 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.222450 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325111 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325173 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325225 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325449 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9r4k\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325486 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325533 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325570 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325627 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325860 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325887 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325953 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.325995 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.326031 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.326051 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.331806 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.332059 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.332504 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.332691 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.335237 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.337132 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.338187 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.338223 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.341376 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.341750 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.341792 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.343268 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.344962 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.349643 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9r4k\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.434039 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:11 crc kubenswrapper[4660]: I1129 07:57:11.693947 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:57:11 crc kubenswrapper[4660]: E1129 07:57:11.694717 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:57:12 crc kubenswrapper[4660]: I1129 07:57:12.008001 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv"] Nov 29 07:57:13 crc kubenswrapper[4660]: I1129 07:57:13.035539 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" event={"ID":"93142c96-03e4-4441-a738-407379eeb07f","Type":"ContainerStarted","Data":"629d87e3b5f1774fba64dbf1a510394415b7a1384a349920b8b27e68b95bd6cf"} Nov 29 07:57:15 crc kubenswrapper[4660]: I1129 07:57:15.056579 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" event={"ID":"93142c96-03e4-4441-a738-407379eeb07f","Type":"ContainerStarted","Data":"e1afb7bfd12c91214af498c1d31620acc786715f433715db23d8a1cd0844b604"} Nov 29 07:57:25 crc kubenswrapper[4660]: I1129 07:57:25.694410 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:57:25 crc kubenswrapper[4660]: E1129 07:57:25.695135 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:57:39 crc kubenswrapper[4660]: I1129 07:57:39.699444 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:57:39 crc kubenswrapper[4660]: E1129 07:57:39.700104 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:57:50 crc kubenswrapper[4660]: I1129 07:57:50.693326 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:57:50 crc kubenswrapper[4660]: E1129 07:57:50.694141 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:57:55 crc kubenswrapper[4660]: I1129 07:57:55.394562 4660 generic.go:334] "Generic (PLEG): container finished" podID="93142c96-03e4-4441-a738-407379eeb07f" containerID="e1afb7bfd12c91214af498c1d31620acc786715f433715db23d8a1cd0844b604" exitCode=0 Nov 29 07:57:55 crc kubenswrapper[4660]: I1129 07:57:55.394827 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" event={"ID":"93142c96-03e4-4441-a738-407379eeb07f","Type":"ContainerDied","Data":"e1afb7bfd12c91214af498c1d31620acc786715f433715db23d8a1cd0844b604"} Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.823135 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880036 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880093 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880124 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880148 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9r4k\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880175 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 
07:57:56.880234 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880256 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880273 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880290 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880326 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880351 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880371 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880396 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.880424 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle\") pod \"93142c96-03e4-4441-a738-407379eeb07f\" (UID: \"93142c96-03e4-4441-a738-407379eeb07f\") " Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.887128 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.887427 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k" (OuterVolumeSpecName: "kube-api-access-g9r4k") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "kube-api-access-g9r4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.887638 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.888666 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.889964 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.891562 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.891583 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.891651 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.891980 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.892490 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.894006 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.901425 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.920540 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.922127 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory" (OuterVolumeSpecName: "inventory") pod "93142c96-03e4-4441-a738-407379eeb07f" (UID: "93142c96-03e4-4441-a738-407379eeb07f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983071 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983112 4660 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983125 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983139 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983150 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983163 4660 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983178 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983193 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983205 4660 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983217 4660 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983241 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983255 4660 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983267 4660 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142c96-03e4-4441-a738-407379eeb07f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:56 crc kubenswrapper[4660]: I1129 07:57:56.983279 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9r4k\" (UniqueName: \"kubernetes.io/projected/93142c96-03e4-4441-a738-407379eeb07f-kube-api-access-g9r4k\") on node \"crc\" DevicePath \"\"" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.411689 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" event={"ID":"93142c96-03e4-4441-a738-407379eeb07f","Type":"ContainerDied","Data":"629d87e3b5f1774fba64dbf1a510394415b7a1384a349920b8b27e68b95bd6cf"} Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.412060 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629d87e3b5f1774fba64dbf1a510394415b7a1384a349920b8b27e68b95bd6cf" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.411730 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.538013 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58"] Nov 29 07:57:57 crc kubenswrapper[4660]: E1129 07:57:57.538470 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93142c96-03e4-4441-a738-407379eeb07f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.538492 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="93142c96-03e4-4441-a738-407379eeb07f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.538773 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="93142c96-03e4-4441-a738-407379eeb07f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.539571 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.542697 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.542701 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.543053 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.543206 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.544774 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.548847 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58"] Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.694179 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.694292 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.694338 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.694387 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7h6r\" (UniqueName: \"kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.694410 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.795478 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.795592 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.795669 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.795742 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h6r\" (UniqueName: \"kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.795777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.796670 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.799908 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.801956 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.808401 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.812655 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h6r\" (UniqueName: \"kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-llf58\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:57 crc kubenswrapper[4660]: I1129 07:57:57.866110 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:57:58 crc kubenswrapper[4660]: I1129 07:57:58.358642 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58"] Nov 29 07:57:58 crc kubenswrapper[4660]: I1129 07:57:58.421166 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" event={"ID":"62033900-fce1-44ce-9b4b-44d61b45123c","Type":"ContainerStarted","Data":"b87359bff871967a0f728078729d3dd57d7e82257f2eb40af18d6c5075fbc345"} Nov 29 07:57:59 crc kubenswrapper[4660]: I1129 07:57:59.431044 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" event={"ID":"62033900-fce1-44ce-9b4b-44d61b45123c","Type":"ContainerStarted","Data":"67bbda7c318d703f341012a6fcce3736e09173d0e960184de4f1f10231c8fe3e"} Nov 29 07:57:59 crc kubenswrapper[4660]: I1129 07:57:59.453185 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" podStartSLOduration=1.983662935 podStartE2EDuration="2.453164725s" podCreationTimestamp="2025-11-29 07:57:57 +0000 UTC" firstStartedPulling="2025-11-29 07:57:58.370852352 +0000 UTC m=+2568.924382251" lastFinishedPulling="2025-11-29 07:57:58.840354142 +0000 UTC m=+2569.393884041" observedRunningTime="2025-11-29 07:57:59.447347119 +0000 UTC m=+2570.000877018" watchObservedRunningTime="2025-11-29 07:57:59.453164725 +0000 UTC m=+2570.006694634" Nov 29 07:58:02 crc kubenswrapper[4660]: I1129 07:58:02.693412 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:58:02 crc kubenswrapper[4660]: E1129 07:58:02.694109 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:58:16 crc kubenswrapper[4660]: I1129 07:58:16.693503 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:58:16 crc kubenswrapper[4660]: E1129 07:58:16.694480 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:58:30 crc 
kubenswrapper[4660]: I1129 07:58:30.693141 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:58:30 crc kubenswrapper[4660]: E1129 07:58:30.693957 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 07:58:42 crc kubenswrapper[4660]: I1129 07:58:42.693673 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 07:58:43 crc kubenswrapper[4660]: I1129 07:58:43.809061 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0"} Nov 29 07:59:01 crc kubenswrapper[4660]: I1129 07:59:01.506942 4660 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.496771192s: [/var/lib/containers/storage/overlay/f5c21af4d6dc390c763608138148bda47181ad45487f1f81ea26d4e266e117b6/diff /var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-llf58_62033900-fce1-44ce-9b4b-44d61b45123c/ovn-edpm-deployment-openstack-edpm-ipam/0.log]; will not log again for this container unless duration exceeds 2s Nov 29 07:59:17 crc kubenswrapper[4660]: I1129 07:59:17.118760 4660 generic.go:334] "Generic (PLEG): container finished" podID="62033900-fce1-44ce-9b4b-44d61b45123c" containerID="67bbda7c318d703f341012a6fcce3736e09173d0e960184de4f1f10231c8fe3e" exitCode=0 Nov 29 07:59:17 crc kubenswrapper[4660]: I1129 07:59:17.118851 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" event={"ID":"62033900-fce1-44ce-9b4b-44d61b45123c","Type":"ContainerDied","Data":"67bbda7c318d703f341012a6fcce3736e09173d0e960184de4f1f10231c8fe3e"} Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.536902 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.585690 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key\") pod \"62033900-fce1-44ce-9b4b-44d61b45123c\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.585746 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle\") pod \"62033900-fce1-44ce-9b4b-44d61b45123c\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.585933 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0\") pod \"62033900-fce1-44ce-9b4b-44d61b45123c\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.585983 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7h6r\" (UniqueName: \"kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r\") pod \"62033900-fce1-44ce-9b4b-44d61b45123c\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.586082 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory\") pod \"62033900-fce1-44ce-9b4b-44d61b45123c\" (UID: \"62033900-fce1-44ce-9b4b-44d61b45123c\") " Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.592977 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r" (OuterVolumeSpecName: "kube-api-access-q7h6r") pod "62033900-fce1-44ce-9b4b-44d61b45123c" (UID: "62033900-fce1-44ce-9b4b-44d61b45123c"). InnerVolumeSpecName "kube-api-access-q7h6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.611277 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "62033900-fce1-44ce-9b4b-44d61b45123c" (UID: "62033900-fce1-44ce-9b4b-44d61b45123c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.614881 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62033900-fce1-44ce-9b4b-44d61b45123c" (UID: "62033900-fce1-44ce-9b4b-44d61b45123c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.614913 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "62033900-fce1-44ce-9b4b-44d61b45123c" (UID: "62033900-fce1-44ce-9b4b-44d61b45123c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.617683 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory" (OuterVolumeSpecName: "inventory") pod "62033900-fce1-44ce-9b4b-44d61b45123c" (UID: "62033900-fce1-44ce-9b4b-44d61b45123c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.687568 4660 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62033900-fce1-44ce-9b4b-44d61b45123c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.687599 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7h6r\" (UniqueName: \"kubernetes.io/projected/62033900-fce1-44ce-9b4b-44d61b45123c-kube-api-access-q7h6r\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.687627 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.687638 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:18 crc kubenswrapper[4660]: I1129 07:59:18.687649 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62033900-fce1-44ce-9b4b-44d61b45123c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.140712 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" event={"ID":"62033900-fce1-44ce-9b4b-44d61b45123c","Type":"ContainerDied","Data":"b87359bff871967a0f728078729d3dd57d7e82257f2eb40af18d6c5075fbc345"} Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.140773 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87359bff871967a0f728078729d3dd57d7e82257f2eb40af18d6c5075fbc345" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.140795 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-llf58" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.286987 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj"] Nov 29 07:59:19 crc kubenswrapper[4660]: E1129 07:59:19.287472 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62033900-fce1-44ce-9b4b-44d61b45123c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.287496 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="62033900-fce1-44ce-9b4b-44d61b45123c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.287745 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="62033900-fce1-44ce-9b4b-44d61b45123c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.288451 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.292091 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.292131 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.292587 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.293202 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.293724 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.294209 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.297709 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq4f\" (UniqueName: \"kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.297788 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.297919 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.297989 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.298066 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.298107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.299429 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj"] Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400521 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400583 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400680 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmq4f\" (UniqueName: \"kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400814 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400902 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.400976 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.404600 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.404671 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.413081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.413728 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.418147 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.418438 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmq4f\" (UniqueName: \"kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:19 crc kubenswrapper[4660]: I1129 07:59:19.608510 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 07:59:20 crc kubenswrapper[4660]: I1129 07:59:20.179291 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj"] Nov 29 07:59:21 crc kubenswrapper[4660]: I1129 07:59:21.158898 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" event={"ID":"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7","Type":"ContainerStarted","Data":"315f635c476590720ec1cf1e5e9d5007f669b3e8a03a1b88413063f356a314a4"} Nov 29 07:59:27 crc kubenswrapper[4660]: I1129 07:59:27.214868 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" event={"ID":"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7","Type":"ContainerStarted","Data":"717c2cbef71cec3c57f57f17b9cd82f810ced0f21302b0675b821c6915541bc4"} Nov 29 07:59:27 crc kubenswrapper[4660]: I1129 07:59:27.239052 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" podStartSLOduration=2.401523226 podStartE2EDuration="8.239028541s" podCreationTimestamp="2025-11-29 07:59:19 +0000 UTC" firstStartedPulling="2025-11-29 07:59:20.175254302 +0000 UTC m=+2650.728784201" lastFinishedPulling="2025-11-29 07:59:26.012759617 +0000 UTC m=+2656.566289516" observedRunningTime="2025-11-29 07:59:27.232505845 +0000 UTC m=+2657.786035784" watchObservedRunningTime="2025-11-29 07:59:27.239028541 +0000 UTC m=+2657.792558440" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.146269 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl"] Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.150750 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.154991 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.155526 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.176686 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl"] Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.253889 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf9z7\" (UniqueName: \"kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.254022 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.254061 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.356120 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf9z7\" (UniqueName: \"kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.356194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.356223 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.357483 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume\") pod 
\"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.363719 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.380356 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf9z7\" (UniqueName: \"kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7\") pod \"collect-profiles-29406720-hrfrl\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:00 crc kubenswrapper[4660]: I1129 08:00:00.472785 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:01 crc kubenswrapper[4660]: I1129 08:00:01.004288 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl"] Nov 29 08:00:01 crc kubenswrapper[4660]: W1129 08:00:01.014928 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64c89ca1_3dda_4e37_994b_892b7207d9b3.slice/crio-ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894 WatchSource:0}: Error finding container ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894: Status 404 returned error can't find the container with id ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894 Nov 29 08:00:01 crc kubenswrapper[4660]: I1129 08:00:01.536213 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" event={"ID":"64c89ca1-3dda-4e37-994b-892b7207d9b3","Type":"ContainerStarted","Data":"ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894"} Nov 29 08:00:02 crc kubenswrapper[4660]: I1129 08:00:02.554355 4660 generic.go:334] "Generic (PLEG): container finished" podID="64c89ca1-3dda-4e37-994b-892b7207d9b3" containerID="6557e244f1e33b5be6330b90898ba271d92eb5c83bd153cffd5e0679cf7cbfb9" exitCode=0 Nov 29 08:00:02 crc kubenswrapper[4660]: I1129 08:00:02.554469 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" event={"ID":"64c89ca1-3dda-4e37-994b-892b7207d9b3","Type":"ContainerDied","Data":"6557e244f1e33b5be6330b90898ba271d92eb5c83bd153cffd5e0679cf7cbfb9"} Nov 29 08:00:03 crc kubenswrapper[4660]: I1129 08:00:03.886469 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.028842 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume\") pod \"64c89ca1-3dda-4e37-994b-892b7207d9b3\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.028975 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume\") pod \"64c89ca1-3dda-4e37-994b-892b7207d9b3\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.029066 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf9z7\" (UniqueName: \"kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7\") pod \"64c89ca1-3dda-4e37-994b-892b7207d9b3\" (UID: \"64c89ca1-3dda-4e37-994b-892b7207d9b3\") " Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.031124 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "64c89ca1-3dda-4e37-994b-892b7207d9b3" (UID: "64c89ca1-3dda-4e37-994b-892b7207d9b3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.035462 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7" (OuterVolumeSpecName: "kube-api-access-jf9z7") pod "64c89ca1-3dda-4e37-994b-892b7207d9b3" (UID: "64c89ca1-3dda-4e37-994b-892b7207d9b3"). InnerVolumeSpecName "kube-api-access-jf9z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.035963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64c89ca1-3dda-4e37-994b-892b7207d9b3" (UID: "64c89ca1-3dda-4e37-994b-892b7207d9b3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.131479 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c89ca1-3dda-4e37-994b-892b7207d9b3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.131514 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c89ca1-3dda-4e37-994b-892b7207d9b3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.131555 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf9z7\" (UniqueName: \"kubernetes.io/projected/64c89ca1-3dda-4e37-994b-892b7207d9b3-kube-api-access-jf9z7\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.574505 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" event={"ID":"64c89ca1-3dda-4e37-994b-892b7207d9b3","Type":"ContainerDied","Data":"ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894"} Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.574555 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca55d4db0d0d789604b9b3cb4b05a19e333f271acac615e6cc8d78da1d437894" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.574559 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406720-hrfrl" Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.958833 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7"] Nov 29 08:00:04 crc kubenswrapper[4660]: I1129 08:00:04.966326 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406675-pxtj7"] Nov 29 08:00:05 crc kubenswrapper[4660]: I1129 08:00:05.709475 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fd06c4-6eb8-4056-ba52-e1260a0d4058" path="/var/lib/kubelet/pods/74fd06c4-6eb8-4056-ba52-e1260a0d4058/volumes" Nov 29 08:00:06 crc kubenswrapper[4660]: I1129 08:00:06.528523 4660 scope.go:117] "RemoveContainer" containerID="2bebb1b480c679df46386778a530c4125916fd0a57c9dc8d58752ea533d27abb" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.833422 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:17 crc kubenswrapper[4660]: E1129 08:00:17.834491 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c89ca1-3dda-4e37-994b-892b7207d9b3" containerName="collect-profiles" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.834509 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c89ca1-3dda-4e37-994b-892b7207d9b3" containerName="collect-profiles" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.834866 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c89ca1-3dda-4e37-994b-892b7207d9b3" containerName="collect-profiles" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.836512 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.868000 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.989523 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.989568 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcv9\" (UniqueName: \"kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:17 crc kubenswrapper[4660]: I1129 08:00:17.989684 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.091348 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.091470 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.091490 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcv9\" (UniqueName: \"kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.091824 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.091946 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.112180 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-llcv9\" (UniqueName: \"kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9\") pod \"redhat-operators-mphvx\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.192256 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.661087 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:18 crc kubenswrapper[4660]: I1129 08:00:18.685978 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerStarted","Data":"3eb4f053395a7984a8d84be78bbf12be3863de2d6a1d0c6b161801050512e4cd"} Nov 29 08:00:19 crc kubenswrapper[4660]: I1129 08:00:19.697382 4660 generic.go:334] "Generic (PLEG): container finished" podID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerID="c07c01ec38bd3bdc965b6e024d48823f570d38528f4d5b7b789ef9c61dd9d64a" exitCode=0 Nov 29 08:00:19 crc kubenswrapper[4660]: I1129 08:00:19.705257 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerDied","Data":"c07c01ec38bd3bdc965b6e024d48823f570d38528f4d5b7b789ef9c61dd9d64a"} Nov 29 08:00:20 crc kubenswrapper[4660]: I1129 08:00:20.709275 4660 generic.go:334] "Generic (PLEG): container finished" podID="f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" containerID="717c2cbef71cec3c57f57f17b9cd82f810ced0f21302b0675b821c6915541bc4" exitCode=0 Nov 29 08:00:20 crc kubenswrapper[4660]: I1129 08:00:20.709322 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" event={"ID":"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7","Type":"ContainerDied","Data":"717c2cbef71cec3c57f57f17b9cd82f810ced0f21302b0675b821c6915541bc4"} Nov 29 08:00:21 crc kubenswrapper[4660]: I1129 08:00:21.718730 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerStarted","Data":"7c5b2406a89779b3aedf5bb56ede28bddda955e861a46d4b5f2f61217d68c15c"} Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.126883 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.274717 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.274874 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.274918 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.274971 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.275018 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmq4f\" (UniqueName: \"kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.275054 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key\") pod \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\" (UID: \"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7\") " Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.302559 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.302954 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory" (OuterVolumeSpecName: "inventory") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.304904 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.308629 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f" (OuterVolumeSpecName: "kube-api-access-nmq4f") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "kube-api-access-nmq4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.314118 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.317003 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" (UID: "f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377341 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377487 4660 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377566 4660 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377649 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmq4f\" (UniqueName: \"kubernetes.io/projected/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-kube-api-access-nmq4f\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377724 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.377796 4660 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.732000 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" event={"ID":"f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7","Type":"ContainerDied","Data":"315f635c476590720ec1cf1e5e9d5007f669b3e8a03a1b88413063f356a314a4"} Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.732399 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="315f635c476590720ec1cf1e5e9d5007f669b3e8a03a1b88413063f356a314a4" Nov 29 08:00:22 crc kubenswrapper[4660]: I1129 08:00:22.732023 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.243215 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm"] Nov 29 08:00:23 crc kubenswrapper[4660]: E1129 08:00:23.243589 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.243619 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.243802 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.244380 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.253648 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.254226 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.254463 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.254724 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.263890 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.289559 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm"] Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.299592 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.299691 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.299824 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.299857 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznx4\" (UniqueName: \"kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.299876 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.402739 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.402792 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznx4\" (UniqueName: \"kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.402814 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.402834 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.402877 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.411283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.412100 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.420180 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.425408 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.435315 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznx4\" (UniqueName: \"kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:23 crc kubenswrapper[4660]: I1129 08:00:23.569110 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:00:24 crc kubenswrapper[4660]: I1129 08:00:24.406752 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm"] Nov 29 08:00:24 crc kubenswrapper[4660]: I1129 08:00:24.754124 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" event={"ID":"d0e385e9-5832-4dae-832e-5e155dd48813","Type":"ContainerStarted","Data":"4ba15733da881e2ba9771c82f3c5deb15b625c14f0abac4b46f2eb8417c7ce40"} Nov 29 08:00:27 crc kubenswrapper[4660]: I1129 08:00:27.814205 4660 generic.go:334] "Generic (PLEG): container finished" podID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerID="7c5b2406a89779b3aedf5bb56ede28bddda955e861a46d4b5f2f61217d68c15c" exitCode=0 Nov 29 08:00:27 crc kubenswrapper[4660]: I1129 08:00:27.814848 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerDied","Data":"7c5b2406a89779b3aedf5bb56ede28bddda955e861a46d4b5f2f61217d68c15c"} Nov 29 08:00:27 crc kubenswrapper[4660]: I1129 08:00:27.823339 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" event={"ID":"d0e385e9-5832-4dae-832e-5e155dd48813","Type":"ContainerStarted","Data":"aa4061ba4c6bc52e10ace80de2c6513376ad633f2f5b237e87e7d43a4e01fa9c"} Nov 29 08:00:27 crc kubenswrapper[4660]: I1129 08:00:27.868057 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" podStartSLOduration=2.912662988 podStartE2EDuration="4.868042472s" podCreationTimestamp="2025-11-29 08:00:23 +0000 UTC" firstStartedPulling="2025-11-29 08:00:24.426433453 +0000 UTC m=+2714.979963352" lastFinishedPulling="2025-11-29 08:00:26.381812937 +0000 UTC m=+2716.935342836" observedRunningTime="2025-11-29 08:00:27.861929448 +0000 UTC m=+2718.415459347" watchObservedRunningTime="2025-11-29 08:00:27.868042472 +0000 UTC m=+2718.421572371" Nov 29 08:00:28 crc kubenswrapper[4660]: I1129 08:00:28.838642 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerStarted","Data":"3994b31ca01cd0629daf1384c533cb540a3d4b78abb94a0921581d576e800231"} Nov 29 08:00:28 crc kubenswrapper[4660]: I1129 08:00:28.863127 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mphvx" podStartSLOduration=3.173698233 podStartE2EDuration="11.863106485s" podCreationTimestamp="2025-11-29 08:00:17 +0000 UTC" firstStartedPulling="2025-11-29 
08:00:19.705900969 +0000 UTC m=+2710.259430868" lastFinishedPulling="2025-11-29 08:00:28.395309221 +0000 UTC m=+2718.948839120" observedRunningTime="2025-11-29 08:00:28.862398116 +0000 UTC m=+2719.415928045" watchObservedRunningTime="2025-11-29 08:00:28.863106485 +0000 UTC m=+2719.416636384" Nov 29 08:00:38 crc kubenswrapper[4660]: I1129 08:00:38.193456 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:38 crc kubenswrapper[4660]: I1129 08:00:38.193773 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:38 crc kubenswrapper[4660]: I1129 08:00:38.244844 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:38 crc kubenswrapper[4660]: I1129 08:00:38.975194 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:39 crc kubenswrapper[4660]: I1129 08:00:39.022460 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:40 crc kubenswrapper[4660]: I1129 08:00:40.948139 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mphvx" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="registry-server" containerID="cri-o://3994b31ca01cd0629daf1384c533cb540a3d4b78abb94a0921581d576e800231" gracePeriod=2 Nov 29 08:00:41 crc kubenswrapper[4660]: I1129 08:00:41.961523 4660 generic.go:334] "Generic (PLEG): container finished" podID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerID="3994b31ca01cd0629daf1384c533cb540a3d4b78abb94a0921581d576e800231" exitCode=0 Nov 29 08:00:41 crc kubenswrapper[4660]: I1129 08:00:41.961751 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerDied","Data":"3994b31ca01cd0629daf1384c533cb540a3d4b78abb94a0921581d576e800231"} Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.088470 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.181732 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcv9\" (UniqueName: \"kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9\") pod \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.181807 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content\") pod \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.181884 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities\") pod \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\" (UID: \"fe2ff44a-d010-4456-827d-fe8ed8c0139d\") " Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.183332 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities" (OuterVolumeSpecName: "utilities") pod "fe2ff44a-d010-4456-827d-fe8ed8c0139d" (UID: "fe2ff44a-d010-4456-827d-fe8ed8c0139d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.188587 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9" (OuterVolumeSpecName: "kube-api-access-llcv9") pod "fe2ff44a-d010-4456-827d-fe8ed8c0139d" (UID: "fe2ff44a-d010-4456-827d-fe8ed8c0139d"). InnerVolumeSpecName "kube-api-access-llcv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.284755 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.284998 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcv9\" (UniqueName: \"kubernetes.io/projected/fe2ff44a-d010-4456-827d-fe8ed8c0139d-kube-api-access-llcv9\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.300298 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe2ff44a-d010-4456-827d-fe8ed8c0139d" (UID: "fe2ff44a-d010-4456-827d-fe8ed8c0139d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.386946 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe2ff44a-d010-4456-827d-fe8ed8c0139d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.971743 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mphvx" event={"ID":"fe2ff44a-d010-4456-827d-fe8ed8c0139d","Type":"ContainerDied","Data":"3eb4f053395a7984a8d84be78bbf12be3863de2d6a1d0c6b161801050512e4cd"} Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.972118 4660 scope.go:117] "RemoveContainer" containerID="3994b31ca01cd0629daf1384c533cb540a3d4b78abb94a0921581d576e800231" Nov 29 08:00:42 crc kubenswrapper[4660]: I1129 08:00:42.971880 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mphvx" Nov 29 08:00:43 crc kubenswrapper[4660]: I1129 08:00:43.007508 4660 scope.go:117] "RemoveContainer" containerID="7c5b2406a89779b3aedf5bb56ede28bddda955e861a46d4b5f2f61217d68c15c" Nov 29 08:00:43 crc kubenswrapper[4660]: I1129 08:00:43.012823 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:43 crc kubenswrapper[4660]: I1129 08:00:43.030620 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mphvx"] Nov 29 08:00:43 crc kubenswrapper[4660]: I1129 08:00:43.035788 4660 scope.go:117] "RemoveContainer" containerID="c07c01ec38bd3bdc965b6e024d48823f570d38528f4d5b7b789ef9c61dd9d64a" Nov 29 08:00:43 crc kubenswrapper[4660]: I1129 08:00:43.705856 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" path="/var/lib/kubelet/pods/fe2ff44a-d010-4456-827d-fe8ed8c0139d/volumes" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.151184 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406721-p2zgx"] Nov 29 08:01:00 crc kubenswrapper[4660]: E1129 08:01:00.153458 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.153569 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="extract-content" Nov 29 08:01:00 crc kubenswrapper[4660]: E1129 08:01:00.153687 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.153767 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="registry-server" Nov 29 08:01:00 crc kubenswrapper[4660]: E1129 08:01:00.153875 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="extract-utilities" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.153956 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="extract-utilities" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.155320 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2ff44a-d010-4456-827d-fe8ed8c0139d" containerName="registry-server" Nov 29 08:01:00 crc 
kubenswrapper[4660]: I1129 08:01:00.156302 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.173696 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-p2zgx"] Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.271766 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgr6\" (UniqueName: \"kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.271832 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.272002 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.272026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.373650 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.373939 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.374070 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgr6\" (UniqueName: \"kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.374183 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc 
kubenswrapper[4660]: I1129 08:01:00.380913 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.381167 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.381691 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.395103 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgr6\" (UniqueName: \"kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6\") pod \"keystone-cron-29406721-p2zgx\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.484088 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:00 crc kubenswrapper[4660]: I1129 08:01:00.932867 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406721-p2zgx"] Nov 29 08:01:01 crc kubenswrapper[4660]: I1129 08:01:01.182932 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-p2zgx" event={"ID":"a2ce58ac-319c-47df-b44b-8958659262f8","Type":"ContainerStarted","Data":"702df438a487768b1c1f8f2febf8f653bf2d7f1ef4f1d0f89d41febe89644412"} Nov 29 08:01:01 crc kubenswrapper[4660]: I1129 08:01:01.183298 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-p2zgx" event={"ID":"a2ce58ac-319c-47df-b44b-8958659262f8","Type":"ContainerStarted","Data":"70209f512182b4f533208745b76404c832f638a6f2acff4190a60bed76462fbd"} Nov 29 08:01:01 crc kubenswrapper[4660]: I1129 08:01:01.210681 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406721-p2zgx" podStartSLOduration=1.210654603 podStartE2EDuration="1.210654603s" podCreationTimestamp="2025-11-29 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:01:01.197736316 +0000 UTC m=+2751.751266235" watchObservedRunningTime="2025-11-29 08:01:01.210654603 +0000 UTC m=+2751.764184512" Nov 29 08:01:05 crc kubenswrapper[4660]: I1129 08:01:05.222540 4660 generic.go:334] "Generic (PLEG): container finished" podID="a2ce58ac-319c-47df-b44b-8958659262f8" containerID="702df438a487768b1c1f8f2febf8f653bf2d7f1ef4f1d0f89d41febe89644412" exitCode=0 Nov 29 08:01:05 crc kubenswrapper[4660]: I1129 08:01:05.222689 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-p2zgx" 
event={"ID":"a2ce58ac-319c-47df-b44b-8958659262f8","Type":"ContainerDied","Data":"702df438a487768b1c1f8f2febf8f653bf2d7f1ef4f1d0f89d41febe89644412"} Nov 29 08:01:05 crc kubenswrapper[4660]: I1129 08:01:05.500440 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:01:05 crc kubenswrapper[4660]: I1129 08:01:05.500515 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.570496 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.698749 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys\") pod \"a2ce58ac-319c-47df-b44b-8958659262f8\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.699030 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data\") pod \"a2ce58ac-319c-47df-b44b-8958659262f8\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.699132 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgr6\" (UniqueName: \"kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6\") pod \"a2ce58ac-319c-47df-b44b-8958659262f8\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.699160 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle\") pod \"a2ce58ac-319c-47df-b44b-8958659262f8\" (UID: \"a2ce58ac-319c-47df-b44b-8958659262f8\") " Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.710834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a2ce58ac-319c-47df-b44b-8958659262f8" (UID: "a2ce58ac-319c-47df-b44b-8958659262f8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.746824 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6" (OuterVolumeSpecName: "kube-api-access-phgr6") pod "a2ce58ac-319c-47df-b44b-8958659262f8" (UID: "a2ce58ac-319c-47df-b44b-8958659262f8"). InnerVolumeSpecName "kube-api-access-phgr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.802974 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgr6\" (UniqueName: \"kubernetes.io/projected/a2ce58ac-319c-47df-b44b-8958659262f8-kube-api-access-phgr6\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.803015 4660 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.823960 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2ce58ac-319c-47df-b44b-8958659262f8" (UID: "a2ce58ac-319c-47df-b44b-8958659262f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.905090 4660 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:06 crc kubenswrapper[4660]: I1129 08:01:06.909750 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data" (OuterVolumeSpecName: "config-data") pod "a2ce58ac-319c-47df-b44b-8958659262f8" (UID: "a2ce58ac-319c-47df-b44b-8958659262f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:01:07 crc kubenswrapper[4660]: I1129 08:01:07.006738 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce58ac-319c-47df-b44b-8958659262f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:01:07 crc kubenswrapper[4660]: I1129 08:01:07.240198 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406721-p2zgx" event={"ID":"a2ce58ac-319c-47df-b44b-8958659262f8","Type":"ContainerDied","Data":"70209f512182b4f533208745b76404c832f638a6f2acff4190a60bed76462fbd"} Nov 29 08:01:07 crc kubenswrapper[4660]: I1129 08:01:07.240234 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70209f512182b4f533208745b76404c832f638a6f2acff4190a60bed76462fbd" Nov 29 08:01:07 crc kubenswrapper[4660]: I1129 08:01:07.240260 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406721-p2zgx" Nov 29 08:01:19 crc kubenswrapper[4660]: I1129 08:01:19.990796 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pf7m4" podUID="2ea3483d-b488-4691-b2f6-3bdb54b0ef49" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 29 08:01:35 crc kubenswrapper[4660]: I1129 08:01:35.501233 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:01:35 crc kubenswrapper[4660]: I1129 08:01:35.501814 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.500907 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.501766 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.501832 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.502794 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.502864 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0" gracePeriod=600 Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.815768 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0" exitCode=0 Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.815800 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" 
event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0"} Nov 29 08:02:05 crc kubenswrapper[4660]: I1129 08:02:05.816302 4660 scope.go:117] "RemoveContainer" containerID="37d4ad278bc0e764c196ef7649fa89246e0fe2ac980ddf644cfb685c0e3725bf" Nov 29 08:02:06 crc kubenswrapper[4660]: I1129 08:02:06.828520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf"} Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.219539 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nqr6r"] Nov 29 08:03:42 crc kubenswrapper[4660]: E1129 08:03:42.220635 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce58ac-319c-47df-b44b-8958659262f8" containerName="keystone-cron" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.220655 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce58ac-319c-47df-b44b-8958659262f8" containerName="keystone-cron" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.220896 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ce58ac-319c-47df-b44b-8958659262f8" containerName="keystone-cron" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.223057 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.259925 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqr6r"] Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.297481 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2q6\" (UniqueName: \"kubernetes.io/projected/2f58b902-a7d3-41b2-8172-b56e91d6010d-kube-api-access-hz2q6\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.297558 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-utilities\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.297640 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-catalog-content\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.399328 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-catalog-content\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.399489 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hz2q6\" (UniqueName: \"kubernetes.io/projected/2f58b902-a7d3-41b2-8172-b56e91d6010d-kube-api-access-hz2q6\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.399519 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-utilities\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.399993 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-utilities\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.400203 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f58b902-a7d3-41b2-8172-b56e91d6010d-catalog-content\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.435779 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2q6\" (UniqueName: \"kubernetes.io/projected/2f58b902-a7d3-41b2-8172-b56e91d6010d-kube-api-access-hz2q6\") pod \"community-operators-nqr6r\" (UID: \"2f58b902-a7d3-41b2-8172-b56e91d6010d\") " pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:42 crc kubenswrapper[4660]: I1129 08:03:42.544752 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:03:43 crc kubenswrapper[4660]: I1129 08:03:43.029334 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqr6r"] Nov 29 08:03:43 crc kubenswrapper[4660]: I1129 08:03:43.698683 4660 generic.go:334] "Generic (PLEG): container finished" podID="2f58b902-a7d3-41b2-8172-b56e91d6010d" containerID="c4e51b39421fbe1ecb8f8b309a7d43bb4817d1b9d98b866b67ed4cddaa2ab36a" exitCode=0 Nov 29 08:03:43 crc kubenswrapper[4660]: I1129 08:03:43.700510 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:03:43 crc kubenswrapper[4660]: I1129 08:03:43.705139 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqr6r" event={"ID":"2f58b902-a7d3-41b2-8172-b56e91d6010d","Type":"ContainerDied","Data":"c4e51b39421fbe1ecb8f8b309a7d43bb4817d1b9d98b866b67ed4cddaa2ab36a"} Nov 29 08:03:43 crc kubenswrapper[4660]: I1129 08:03:43.705174 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqr6r" event={"ID":"2f58b902-a7d3-41b2-8172-b56e91d6010d","Type":"ContainerStarted","Data":"bf7d26d4385138f847500e67f33595fe69ef878c152a820af32290c8c1e822a9"} Nov 29 08:03:51 crc kubenswrapper[4660]: I1129 08:03:51.787745 4660 generic.go:334] "Generic (PLEG): container finished" podID="2f58b902-a7d3-41b2-8172-b56e91d6010d" containerID="dc8ab07894d578dd7796f67d43dcf73656ce1fb4f3809d7912b422e5136f8e02" exitCode=0 Nov 29 08:03:51 crc kubenswrapper[4660]: I1129 08:03:51.787976 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqr6r" event={"ID":"2f58b902-a7d3-41b2-8172-b56e91d6010d","Type":"ContainerDied","Data":"dc8ab07894d578dd7796f67d43dcf73656ce1fb4f3809d7912b422e5136f8e02"} Nov 29 08:03:53 crc kubenswrapper[4660]: I1129 08:03:53.809361 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqr6r" event={"ID":"2f58b902-a7d3-41b2-8172-b56e91d6010d","Type":"ContainerStarted","Data":"f6605db5b2d5ba0ef1729d9b228c67baabfed3259d1eee16adc6007f9bea8eb5"} Nov 29 08:03:53 crc kubenswrapper[4660]: I1129 08:03:53.837247 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nqr6r" podStartSLOduration=2.71460643 podStartE2EDuration="11.837228263s" podCreationTimestamp="2025-11-29 08:03:42 +0000 UTC" firstStartedPulling="2025-11-29 08:03:43.700200946 +0000 UTC m=+2914.253730845" lastFinishedPulling="2025-11-29 08:03:52.822822779 +0000 UTC m=+2923.376352678" observedRunningTime="2025-11-29 08:03:53.830206183 +0000 UTC m=+2924.383736092" watchObservedRunningTime="2025-11-29 08:03:53.837228263 +0000 UTC m=+2924.390758162" Nov 29 08:04:02 crc kubenswrapper[4660]: I1129 08:04:02.545930 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:04:02 crc kubenswrapper[4660]: I1129 08:04:02.546493 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:04:02 crc kubenswrapper[4660]: I1129 08:04:02.598362 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:04:02 crc kubenswrapper[4660]: I1129 08:04:02.955961 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-nqr6r" Nov 29 08:04:03 crc kubenswrapper[4660]: I1129 08:04:03.784387 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqr6r"] Nov 29 08:04:03 crc kubenswrapper[4660]: I1129 08:04:03.852806 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8rvfj"] Nov 29 08:04:03 crc kubenswrapper[4660]: I1129 08:04:03.853065 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8rvfj" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="registry-server" containerID="cri-o://789968d8bc16da6606e502f055d17f452d2ab7fa60020c6c7aa28dd0b19aa7be" gracePeriod=2 Nov 29 08:04:05 crc kubenswrapper[4660]: I1129 08:04:05.500220 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:04:05 crc kubenswrapper[4660]: I1129 08:04:05.501551 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:04:05 crc kubenswrapper[4660]: I1129 08:04:05.953801 4660 generic.go:334] "Generic (PLEG): container finished" podID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerID="789968d8bc16da6606e502f055d17f452d2ab7fa60020c6c7aa28dd0b19aa7be" exitCode=0 Nov 29 08:04:05 crc kubenswrapper[4660]: I1129 08:04:05.953904 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerDied","Data":"789968d8bc16da6606e502f055d17f452d2ab7fa60020c6c7aa28dd0b19aa7be"} Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.189837 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.377369 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content\") pod \"d65de9ee-1062-4bbe-bfef-1e39897b418f\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.377769 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-452pv\" (UniqueName: \"kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv\") pod \"d65de9ee-1062-4bbe-bfef-1e39897b418f\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.377880 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities\") pod \"d65de9ee-1062-4bbe-bfef-1e39897b418f\" (UID: \"d65de9ee-1062-4bbe-bfef-1e39897b418f\") " Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.378599 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities" (OuterVolumeSpecName: "utilities") pod "d65de9ee-1062-4bbe-bfef-1e39897b418f" (UID: "d65de9ee-1062-4bbe-bfef-1e39897b418f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.385481 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv" (OuterVolumeSpecName: "kube-api-access-452pv") pod "d65de9ee-1062-4bbe-bfef-1e39897b418f" (UID: "d65de9ee-1062-4bbe-bfef-1e39897b418f"). InnerVolumeSpecName "kube-api-access-452pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.479777 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-452pv\" (UniqueName: \"kubernetes.io/projected/d65de9ee-1062-4bbe-bfef-1e39897b418f-kube-api-access-452pv\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.479809 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.572586 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d65de9ee-1062-4bbe-bfef-1e39897b418f" (UID: "d65de9ee-1062-4bbe-bfef-1e39897b418f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.580961 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d65de9ee-1062-4bbe-bfef-1e39897b418f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.768941 4660 scope.go:117] "RemoveContainer" containerID="789968d8bc16da6606e502f055d17f452d2ab7fa60020c6c7aa28dd0b19aa7be" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.838194 4660 scope.go:117] "RemoveContainer" containerID="5b6d6a854a1c2d8c1cdde43b5834346b03309edb3c7a797dd30468103d241c95" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.888406 4660 scope.go:117] "RemoveContainer" containerID="d6c4b38436f96ca33a7c6e7f8d4ac50f13f94afbada97ea28791e862c3c99296" Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.970386 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvfj" event={"ID":"d65de9ee-1062-4bbe-bfef-1e39897b418f","Type":"ContainerDied","Data":"fad20ce2f6e1cf11af6c781b3a892678e80576964eae96dcd53c059c1e236a69"} Nov 29 08:04:06 crc kubenswrapper[4660]: I1129 08:04:06.970435 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvfj" Nov 29 08:04:07 crc kubenswrapper[4660]: I1129 08:04:07.009768 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8rvfj"] Nov 29 08:04:07 crc kubenswrapper[4660]: I1129 08:04:07.017464 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8rvfj"] Nov 29 08:04:07 crc kubenswrapper[4660]: I1129 08:04:07.728417 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" path="/var/lib/kubelet/pods/d65de9ee-1062-4bbe-bfef-1e39897b418f/volumes" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.500446 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.501004 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.832470 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:35 crc kubenswrapper[4660]: E1129 08:04:35.832970 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="extract-utilities" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.832989 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="extract-utilities" Nov 29 08:04:35 crc kubenswrapper[4660]: E1129 08:04:35.833009 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="registry-server" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 
08:04:35.833016 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="registry-server" Nov 29 08:04:35 crc kubenswrapper[4660]: E1129 08:04:35.833044 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="extract-content" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.833050 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="extract-content" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.833248 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65de9ee-1062-4bbe-bfef-1e39897b418f" containerName="registry-server" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.834892 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.840846 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.861772 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.861877 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.861985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57cw\" (UniqueName: \"kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.963024 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.963097 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.963148 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57cw\" (UniqueName: \"kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 
29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.963521 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.963928 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:35 crc kubenswrapper[4660]: I1129 08:04:35.991029 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57cw\" (UniqueName: \"kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw\") pod \"certified-operators-6jlkx\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:36 crc kubenswrapper[4660]: I1129 08:04:36.167753 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:36 crc kubenswrapper[4660]: I1129 08:04:36.755726 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:37 crc kubenswrapper[4660]: I1129 08:04:37.232023 4660 generic.go:334] "Generic (PLEG): container finished" podID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerID="8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14" exitCode=0 Nov 29 08:04:37 crc kubenswrapper[4660]: I1129 08:04:37.232079 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerDied","Data":"8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14"} Nov 29 08:04:37 crc kubenswrapper[4660]: I1129 08:04:37.232379 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerStarted","Data":"a6eb629202610b3366658db3a35b8d84cc7b417581fcf62c0d9c8cdfcd87fc1a"} Nov 29 08:04:38 crc kubenswrapper[4660]: I1129 08:04:38.241992 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerStarted","Data":"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990"} Nov 29 08:04:39 crc kubenswrapper[4660]: I1129 08:04:39.254661 4660 generic.go:334] "Generic (PLEG): container finished" podID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerID="ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990" exitCode=0 Nov 29 08:04:39 crc kubenswrapper[4660]: I1129 08:04:39.254705 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerDied","Data":"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990"} Nov 29 08:04:42 crc kubenswrapper[4660]: I1129 08:04:42.287382 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" 
event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerStarted","Data":"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393"} Nov 29 08:04:42 crc kubenswrapper[4660]: I1129 08:04:42.313668 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6jlkx" podStartSLOduration=3.172660448 podStartE2EDuration="7.313649661s" podCreationTimestamp="2025-11-29 08:04:35 +0000 UTC" firstStartedPulling="2025-11-29 08:04:37.234363431 +0000 UTC m=+2967.787893330" lastFinishedPulling="2025-11-29 08:04:41.375352644 +0000 UTC m=+2971.928882543" observedRunningTime="2025-11-29 08:04:42.311592916 +0000 UTC m=+2972.865122815" watchObservedRunningTime="2025-11-29 08:04:42.313649661 +0000 UTC m=+2972.867179560" Nov 29 08:04:46 crc kubenswrapper[4660]: I1129 08:04:46.168475 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:46 crc kubenswrapper[4660]: I1129 08:04:46.168796 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:46 crc kubenswrapper[4660]: I1129 08:04:46.213503 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:46 crc kubenswrapper[4660]: I1129 08:04:46.361211 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:46 crc kubenswrapper[4660]: I1129 08:04:46.451591 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.343378 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6jlkx" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="registry-server" containerID="cri-o://1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393" gracePeriod=2 Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.906998 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.915980 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content\") pod \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.916061 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j57cw\" (UniqueName: \"kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw\") pod \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.916228 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities\") pod \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\" (UID: \"31c7fe60-4dcb-4d6d-8696-16b60d748f26\") " Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.918155 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities" (OuterVolumeSpecName: "utilities") pod "31c7fe60-4dcb-4d6d-8696-16b60d748f26" (UID: "31c7fe60-4dcb-4d6d-8696-16b60d748f26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:04:48 crc kubenswrapper[4660]: I1129 08:04:48.922802 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw" (OuterVolumeSpecName: "kube-api-access-j57cw") pod "31c7fe60-4dcb-4d6d-8696-16b60d748f26" (UID: "31c7fe60-4dcb-4d6d-8696-16b60d748f26"). InnerVolumeSpecName "kube-api-access-j57cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.015314 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c7fe60-4dcb-4d6d-8696-16b60d748f26" (UID: "31c7fe60-4dcb-4d6d-8696-16b60d748f26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.018845 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.018889 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j57cw\" (UniqueName: \"kubernetes.io/projected/31c7fe60-4dcb-4d6d-8696-16b60d748f26-kube-api-access-j57cw\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.018905 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c7fe60-4dcb-4d6d-8696-16b60d748f26-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.359551 4660 generic.go:334] "Generic (PLEG): container finished" podID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerID="1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393" exitCode=0 Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.359646 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jlkx" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.359665 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerDied","Data":"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393"} Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.360117 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jlkx" event={"ID":"31c7fe60-4dcb-4d6d-8696-16b60d748f26","Type":"ContainerDied","Data":"a6eb629202610b3366658db3a35b8d84cc7b417581fcf62c0d9c8cdfcd87fc1a"} Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.360149 4660 scope.go:117] "RemoveContainer" containerID="1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.382219 4660 scope.go:117] "RemoveContainer" containerID="ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.409975 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.419942 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6jlkx"] Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.432490 4660 scope.go:117] "RemoveContainer" containerID="8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.495875 4660 scope.go:117] "RemoveContainer" containerID="1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393" Nov 29 08:04:49 crc kubenswrapper[4660]: E1129 08:04:49.497045 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393\": container with ID starting with 1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393 not found: ID does not exist" containerID="1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.497081 
4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393"} err="failed to get container status \"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393\": rpc error: code = NotFound desc = could not find container \"1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393\": container with ID starting with 1c296040d48da61c5053b58eae4f0774c520113b031e38ba9599063724076393 not found: ID does not exist" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.497101 4660 scope.go:117] "RemoveContainer" containerID="ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990" Nov 29 08:04:49 crc kubenswrapper[4660]: E1129 08:04:49.497338 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990\": container with ID starting with ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990 not found: ID does not exist" containerID="ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.497363 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990"} err="failed to get container status \"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990\": rpc error: code = NotFound desc = could not find container \"ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990\": container with ID starting with ab11016453da29abf030251d33968b01b7f09faff26153c691e7f9a304614990 not found: ID does not exist" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.497376 4660 scope.go:117] "RemoveContainer" containerID="8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14" Nov 29 08:04:49 crc kubenswrapper[4660]: E1129 08:04:49.497621 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14\": container with ID starting with 8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14 not found: ID does not exist" containerID="8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.497649 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14"} err="failed to get container status \"8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14\": rpc error: code = NotFound desc = could not find container \"8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14\": container with ID starting with 8a87f0bb8c1de9a17a122b6e0831963896575cb1f1caf291252d0ab3db735f14 not found: ID does not exist" Nov 29 08:04:49 crc kubenswrapper[4660]: I1129 08:04:49.708211 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" path="/var/lib/kubelet/pods/31c7fe60-4dcb-4d6d-8696-16b60d748f26/volumes" Nov 29 08:05:05 crc kubenswrapper[4660]: I1129 08:05:05.500825 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:05:05 crc kubenswrapper[4660]: I1129 08:05:05.501353 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:05:05 crc kubenswrapper[4660]: I1129 08:05:05.501399 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:05:05 crc kubenswrapper[4660]: I1129 08:05:05.502091 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:05:05 crc kubenswrapper[4660]: I1129 08:05:05.502136 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" gracePeriod=600 Nov 29 08:05:06 crc kubenswrapper[4660]: E1129 08:05:06.133296 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:05:06 crc kubenswrapper[4660]: I1129 08:05:06.512675 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf"} Nov 29 08:05:06 crc kubenswrapper[4660]: I1129 08:05:06.512599 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" exitCode=0 Nov 29 08:05:06 crc kubenswrapper[4660]: I1129 08:05:06.512983 4660 scope.go:117] "RemoveContainer" containerID="964fa4f27f9fccff7a7ec7611667e4e2fccc5368071272aaffbbf88cdb3c85b0" Nov 29 08:05:06 crc kubenswrapper[4660]: I1129 08:05:06.513537 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:05:06 crc kubenswrapper[4660]: E1129 08:05:06.513841 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:05:20 crc kubenswrapper[4660]: I1129 08:05:20.693688 4660 scope.go:117] "RemoveContainer" 
containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:05:20 crc kubenswrapper[4660]: E1129 08:05:20.694466 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:05:21 crc kubenswrapper[4660]: I1129 08:05:21.637468 4660 generic.go:334] "Generic (PLEG): container finished" podID="d0e385e9-5832-4dae-832e-5e155dd48813" containerID="aa4061ba4c6bc52e10ace80de2c6513376ad633f2f5b237e87e7d43a4e01fa9c" exitCode=0 Nov 29 08:05:21 crc kubenswrapper[4660]: I1129 08:05:21.637522 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" event={"ID":"d0e385e9-5832-4dae-832e-5e155dd48813","Type":"ContainerDied","Data":"aa4061ba4c6bc52e10ace80de2c6513376ad633f2f5b237e87e7d43a4e01fa9c"} Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.220130 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.404588 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lznx4\" (UniqueName: \"kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4\") pod \"d0e385e9-5832-4dae-832e-5e155dd48813\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.404709 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle\") pod \"d0e385e9-5832-4dae-832e-5e155dd48813\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.404775 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0\") pod \"d0e385e9-5832-4dae-832e-5e155dd48813\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.404912 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory\") pod \"d0e385e9-5832-4dae-832e-5e155dd48813\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.404967 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key\") pod \"d0e385e9-5832-4dae-832e-5e155dd48813\" (UID: \"d0e385e9-5832-4dae-832e-5e155dd48813\") " Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.427043 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4" (OuterVolumeSpecName: "kube-api-access-lznx4") pod "d0e385e9-5832-4dae-832e-5e155dd48813" (UID: "d0e385e9-5832-4dae-832e-5e155dd48813"). InnerVolumeSpecName "kube-api-access-lznx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.428921 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d0e385e9-5832-4dae-832e-5e155dd48813" (UID: "d0e385e9-5832-4dae-832e-5e155dd48813"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.435001 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory" (OuterVolumeSpecName: "inventory") pod "d0e385e9-5832-4dae-832e-5e155dd48813" (UID: "d0e385e9-5832-4dae-832e-5e155dd48813"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.436842 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0e385e9-5832-4dae-832e-5e155dd48813" (UID: "d0e385e9-5832-4dae-832e-5e155dd48813"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.442284 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d0e385e9-5832-4dae-832e-5e155dd48813" (UID: "d0e385e9-5832-4dae-832e-5e155dd48813"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.507010 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.507052 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lznx4\" (UniqueName: \"kubernetes.io/projected/d0e385e9-5832-4dae-832e-5e155dd48813-kube-api-access-lznx4\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.507068 4660 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.507081 4660 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.507093 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0e385e9-5832-4dae-832e-5e155dd48813-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.661517 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" event={"ID":"d0e385e9-5832-4dae-832e-5e155dd48813","Type":"ContainerDied","Data":"4ba15733da881e2ba9771c82f3c5deb15b625c14f0abac4b46f2eb8417c7ce40"} Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.661546 4660 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.661561 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba15733da881e2ba9771c82f3c5deb15b625c14f0abac4b46f2eb8417c7ce40" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.774706 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk"] Nov 29 08:05:23 crc kubenswrapper[4660]: E1129 08:05:23.775460 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="extract-utilities" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775484 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="extract-utilities" Nov 29 08:05:23 crc kubenswrapper[4660]: E1129 08:05:23.775508 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e385e9-5832-4dae-832e-5e155dd48813" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775518 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e385e9-5832-4dae-832e-5e155dd48813" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 08:05:23 crc kubenswrapper[4660]: E1129 08:05:23.775532 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="registry-server" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775540 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="registry-server" Nov 29 08:05:23 crc kubenswrapper[4660]: E1129 08:05:23.775578 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="extract-content" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775587 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="extract-content" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775854 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c7fe60-4dcb-4d6d-8696-16b60d748f26" containerName="registry-server" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.775876 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e385e9-5832-4dae-832e-5e155dd48813" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.782211 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.784201 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk"] Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.785679 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.785740 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.786840 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.787741 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.787807 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.787975 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.788137 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915293 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915371 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915390 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915467 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915510 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvhm\" (UniqueName: 
\"kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915627 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915752 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:23 crc kubenswrapper[4660]: I1129 08:05:23.915920 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018093 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018171 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018219 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018284 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018361 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018395 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018433 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018453 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvhm\" (UniqueName: \"kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.018497 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.020264 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.023188 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.023305 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.023448 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.023481 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.023755 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.026252 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.026704 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.042678 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvhm\" (UniqueName: \"kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-flbrk\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.099790 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.623826 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk"] Nov 29 08:05:24 crc kubenswrapper[4660]: I1129 08:05:24.674924 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" event={"ID":"f4ebec6a-7674-4948-94b8-51d4f1e6de90","Type":"ContainerStarted","Data":"d821ffc24c2534844a1b19fe31f81aa0abba5fa6760053bc99dd53a89f33e27a"} Nov 29 08:05:25 crc kubenswrapper[4660]: I1129 08:05:25.687810 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" event={"ID":"f4ebec6a-7674-4948-94b8-51d4f1e6de90","Type":"ContainerStarted","Data":"bd2cda7b11c630e43de7d0b7c2ecebbc8717cc7c2824cf7327b18104d9a93804"} Nov 29 08:05:33 crc kubenswrapper[4660]: I1129 08:05:33.694220 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:05:33 crc kubenswrapper[4660]: E1129 08:05:33.694885 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:05:47 crc kubenswrapper[4660]: I1129 08:05:47.698256 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:05:47 crc kubenswrapper[4660]: E1129 08:05:47.698986 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:05:58 crc kubenswrapper[4660]: I1129 08:05:58.694520 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:05:58 crc kubenswrapper[4660]: E1129 08:05:58.695980 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:06:11 crc kubenswrapper[4660]: I1129 08:06:11.694630 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:06:11 crc kubenswrapper[4660]: E1129 08:06:11.695359 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:06:23 crc kubenswrapper[4660]: I1129 08:06:23.694352 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:06:23 crc kubenswrapper[4660]: E1129 08:06:23.695120 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:06:37 crc kubenswrapper[4660]: I1129 08:06:37.694064 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:06:37 crc kubenswrapper[4660]: E1129 08:06:37.694967 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:06:41 crc kubenswrapper[4660]: I1129 08:06:41.973972 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" podStartSLOduration=78.345500002 podStartE2EDuration="1m18.973953629s" podCreationTimestamp="2025-11-29 08:05:23 +0000 UTC" firstStartedPulling="2025-11-29 08:05:24.63144947 +0000 UTC m=+3015.184979369" lastFinishedPulling="2025-11-29 08:05:25.259903097 +0000 UTC m=+3015.813432996" observedRunningTime="2025-11-29 08:05:25.715945837 +0000 UTC m=+3016.269475746" watchObservedRunningTime="2025-11-29 08:06:41.973953629 +0000 UTC m=+3092.527483528" Nov 29 08:06:41 crc kubenswrapper[4660]: I1129 08:06:41.979222 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:41 crc kubenswrapper[4660]: I1129 08:06:41.980960 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:41 crc kubenswrapper[4660]: I1129 08:06:41.994343 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.126519 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzvx\" (UniqueName: \"kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.126608 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.126694 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.228645 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.228778 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzvx\" (UniqueName: \"kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.228832 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.229132 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.229172 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.248161 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7kzvx\" (UniqueName: \"kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx\") pod \"redhat-marketplace-wz49w\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.300782 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:42 crc kubenswrapper[4660]: I1129 08:06:42.915462 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:43 crc kubenswrapper[4660]: I1129 08:06:43.369001 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerStarted","Data":"71c0595890258560aa4e4e02f8284e30cbcef1fd00ad04189e39980ebdfa8cb0"} Nov 29 08:06:44 crc kubenswrapper[4660]: I1129 08:06:44.382689 4660 generic.go:334] "Generic (PLEG): container finished" podID="2aca835c-7528-421b-8291-4384b0e31d2b" containerID="268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd" exitCode=0 Nov 29 08:06:44 crc kubenswrapper[4660]: I1129 08:06:44.383005 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerDied","Data":"268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd"} Nov 29 08:06:46 crc kubenswrapper[4660]: I1129 08:06:46.403682 4660 generic.go:334] "Generic (PLEG): container finished" podID="2aca835c-7528-421b-8291-4384b0e31d2b" containerID="a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4" exitCode=0 Nov 29 08:06:46 crc kubenswrapper[4660]: I1129 08:06:46.403744 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerDied","Data":"a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4"} Nov 29 08:06:47 crc kubenswrapper[4660]: I1129 08:06:47.423325 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerStarted","Data":"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3"} Nov 29 08:06:47 crc kubenswrapper[4660]: I1129 08:06:47.451094 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wz49w" podStartSLOduration=4.000183778 podStartE2EDuration="6.451071325s" podCreationTimestamp="2025-11-29 08:06:41 +0000 UTC" firstStartedPulling="2025-11-29 08:06:44.38537438 +0000 UTC m=+3094.938904279" lastFinishedPulling="2025-11-29 08:06:46.836261927 +0000 UTC m=+3097.389791826" observedRunningTime="2025-11-29 08:06:47.44569857 +0000 UTC m=+3097.999228489" watchObservedRunningTime="2025-11-29 08:06:47.451071325 +0000 UTC m=+3098.004601224" Nov 29 08:06:50 crc kubenswrapper[4660]: I1129 08:06:50.694249 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:06:50 crc kubenswrapper[4660]: E1129 08:06:50.694865 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:06:52 crc kubenswrapper[4660]: I1129 08:06:52.301643 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:52 crc kubenswrapper[4660]: I1129 08:06:52.303802 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:52 crc kubenswrapper[4660]: I1129 08:06:52.365828 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:52 crc kubenswrapper[4660]: I1129 08:06:52.520522 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:52 crc kubenswrapper[4660]: I1129 08:06:52.607804 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:54 crc kubenswrapper[4660]: I1129 08:06:54.483984 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wz49w" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="registry-server" containerID="cri-o://7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3" gracePeriod=2 Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.018244 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.175025 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities\") pod \"2aca835c-7528-421b-8291-4384b0e31d2b\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.175294 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzvx\" (UniqueName: \"kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx\") pod \"2aca835c-7528-421b-8291-4384b0e31d2b\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.175563 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content\") pod \"2aca835c-7528-421b-8291-4384b0e31d2b\" (UID: \"2aca835c-7528-421b-8291-4384b0e31d2b\") " Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.175768 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities" (OuterVolumeSpecName: "utilities") pod "2aca835c-7528-421b-8291-4384b0e31d2b" (UID: "2aca835c-7528-421b-8291-4384b0e31d2b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.176170 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.184794 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx" (OuterVolumeSpecName: "kube-api-access-7kzvx") pod "2aca835c-7528-421b-8291-4384b0e31d2b" (UID: "2aca835c-7528-421b-8291-4384b0e31d2b"). InnerVolumeSpecName "kube-api-access-7kzvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.202785 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aca835c-7528-421b-8291-4384b0e31d2b" (UID: "2aca835c-7528-421b-8291-4384b0e31d2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.277752 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aca835c-7528-421b-8291-4384b0e31d2b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.277800 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzvx\" (UniqueName: \"kubernetes.io/projected/2aca835c-7528-421b-8291-4384b0e31d2b-kube-api-access-7kzvx\") on node \"crc\" DevicePath \"\"" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.495756 4660 generic.go:334] "Generic (PLEG): container finished" podID="2aca835c-7528-421b-8291-4384b0e31d2b" containerID="7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3" exitCode=0 Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.495807 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerDied","Data":"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3"} Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.496732 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz49w" event={"ID":"2aca835c-7528-421b-8291-4384b0e31d2b","Type":"ContainerDied","Data":"71c0595890258560aa4e4e02f8284e30cbcef1fd00ad04189e39980ebdfa8cb0"} Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.495894 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz49w" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.496759 4660 scope.go:117] "RemoveContainer" containerID="7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.515014 4660 scope.go:117] "RemoveContainer" containerID="a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.538511 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.551534 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz49w"] Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.564063 4660 scope.go:117] "RemoveContainer" containerID="268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.592553 4660 scope.go:117] "RemoveContainer" containerID="7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3" Nov 29 08:06:55 crc kubenswrapper[4660]: E1129 08:06:55.594169 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3\": container with ID starting with 7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3 not found: ID does not exist" containerID="7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.594198 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3"} err="failed to get container status \"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3\": rpc error: code = NotFound desc = could not find container \"7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3\": container with ID starting with 7cc2b08798b0c10a3250568b74ec6a4e45c32a602b9d04ae2e6234670f2150a3 not found: ID does not exist" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.594223 4660 scope.go:117] "RemoveContainer" containerID="a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4" Nov 29 08:06:55 crc kubenswrapper[4660]: E1129 08:06:55.594633 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4\": container with ID starting with a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4 not found: ID does not exist" containerID="a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.594659 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4"} err="failed to get container status \"a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4\": rpc error: code = NotFound desc = could not find container \"a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4\": container with ID starting with a9bb008c64e5acb592e840eb42cf0c274f98f7306d5f47c3efc26ab9881351e4 not found: ID does not exist" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.594678 4660 scope.go:117] "RemoveContainer" 
containerID="268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd" Nov 29 08:06:55 crc kubenswrapper[4660]: E1129 08:06:55.594855 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd\": container with ID starting with 268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd not found: ID does not exist" containerID="268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.594877 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd"} err="failed to get container status \"268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd\": rpc error: code = NotFound desc = could not find container \"268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd\": container with ID starting with 268f6de4f5784ea37cec977420958a343c6277083e2c493e2711dd0d485621bd not found: ID does not exist" Nov 29 08:06:55 crc kubenswrapper[4660]: I1129 08:06:55.704184 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" path="/var/lib/kubelet/pods/2aca835c-7528-421b-8291-4384b0e31d2b/volumes" Nov 29 08:07:01 crc kubenswrapper[4660]: I1129 08:07:01.694015 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:07:01 crc kubenswrapper[4660]: E1129 08:07:01.694826 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:07:13 crc kubenswrapper[4660]: I1129 08:07:13.750017 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75ddc44955-xj8mn" podUID="27a79873-e3bd-4172-b5c3-17a981a9a091" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 29 08:07:16 crc kubenswrapper[4660]: I1129 08:07:16.693854 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:07:16 crc kubenswrapper[4660]: E1129 08:07:16.695024 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:07:31 crc kubenswrapper[4660]: I1129 08:07:31.694168 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:07:31 crc kubenswrapper[4660]: E1129 08:07:31.696914 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:07:45 crc kubenswrapper[4660]: I1129 08:07:45.707388 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:07:45 crc kubenswrapper[4660]: E1129 08:07:45.712835 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:07:56 crc kubenswrapper[4660]: I1129 08:07:56.693986 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:07:56 crc kubenswrapper[4660]: E1129 08:07:56.694706 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:08:09 crc kubenswrapper[4660]: I1129 08:08:09.703181 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:08:09 crc kubenswrapper[4660]: E1129 08:08:09.704189 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:08:21 crc kubenswrapper[4660]: I1129 08:08:21.695239 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:08:21 crc kubenswrapper[4660]: E1129 08:08:21.696090 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:08:32 crc kubenswrapper[4660]: I1129 08:08:32.383684 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4ebec6a-7674-4948-94b8-51d4f1e6de90" containerID="bd2cda7b11c630e43de7d0b7c2ecebbc8717cc7c2824cf7327b18104d9a93804" exitCode=0 Nov 29 08:08:32 crc kubenswrapper[4660]: I1129 08:08:32.383800 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" event={"ID":"f4ebec6a-7674-4948-94b8-51d4f1e6de90","Type":"ContainerDied","Data":"bd2cda7b11c630e43de7d0b7c2ecebbc8717cc7c2824cf7327b18104d9a93804"} Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 
08:08:33.831143 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.931844 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.931967 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932025 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjvhm\" (UniqueName: \"kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932115 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932151 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932194 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932244 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932345 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.932379 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key\") pod \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\" (UID: \"f4ebec6a-7674-4948-94b8-51d4f1e6de90\") " Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.938563 4660 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm" (OuterVolumeSpecName: "kube-api-access-rjvhm") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "kube-api-access-rjvhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.963558 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.975729 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory" (OuterVolumeSpecName: "inventory") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:33 crc kubenswrapper[4660]: I1129 08:08:33.978782 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.000218 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.002844 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.009694 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.017420 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.022868 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f4ebec6a-7674-4948-94b8-51d4f1e6de90" (UID: "f4ebec6a-7674-4948-94b8-51d4f1e6de90"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035403 4660 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035439 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjvhm\" (UniqueName: \"kubernetes.io/projected/f4ebec6a-7674-4948-94b8-51d4f1e6de90-kube-api-access-rjvhm\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035448 4660 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035456 4660 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035467 4660 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035475 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035485 4660 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035495 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ebec6a-7674-4948-94b8-51d4f1e6de90-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.035503 4660 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4ebec6a-7674-4948-94b8-51d4f1e6de90-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.413067 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" event={"ID":"f4ebec6a-7674-4948-94b8-51d4f1e6de90","Type":"ContainerDied","Data":"d821ffc24c2534844a1b19fe31f81aa0abba5fa6760053bc99dd53a89f33e27a"} Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.413305 4660 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d821ffc24c2534844a1b19fe31f81aa0abba5fa6760053bc99dd53a89f33e27a" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.413162 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-flbrk" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.511576 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc"] Nov 29 08:08:34 crc kubenswrapper[4660]: E1129 08:08:34.512023 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="registry-server" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512044 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="registry-server" Nov 29 08:08:34 crc kubenswrapper[4660]: E1129 08:08:34.512067 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ebec6a-7674-4948-94b8-51d4f1e6de90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512075 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ebec6a-7674-4948-94b8-51d4f1e6de90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:08:34 crc kubenswrapper[4660]: E1129 08:08:34.512099 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="extract-utilities" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512110 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="extract-utilities" Nov 29 08:08:34 crc kubenswrapper[4660]: E1129 08:08:34.512130 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="extract-content" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512138 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="extract-content" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512359 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ebec6a-7674-4948-94b8-51d4f1e6de90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.512398 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aca835c-7528-421b-8291-4384b0e31d2b" containerName="registry-server" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.515024 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.518981 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.519552 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.525103 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.525343 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.525503 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hf4sz" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.536528 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc"] Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651116 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651181 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651208 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651233 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdkn\" (UniqueName: \"kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651261 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651291 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.651399 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.693384 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:08:34 crc kubenswrapper[4660]: E1129 08:08:34.693602 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.752786 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.752847 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.752903 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.752933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.752955 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nwdkn\" (UniqueName: \"kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.753006 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.753031 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.757455 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.758499 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.761014 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.762149 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.762309 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.762634 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.774341 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwdkn\" (UniqueName: \"kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:34 crc kubenswrapper[4660]: I1129 08:08:34.837531 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:08:35 crc kubenswrapper[4660]: I1129 08:08:35.445067 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc"] Nov 29 08:08:36 crc kubenswrapper[4660]: I1129 08:08:36.430047 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" event={"ID":"fddda6dc-cca7-41a8-8be3-1e6647af2356","Type":"ContainerStarted","Data":"9eb13688b50028ce8e126ed11cde0cebac3521bcfeb5eb6a06abf06b55dec1db"} Nov 29 08:08:49 crc kubenswrapper[4660]: I1129 08:08:49.701922 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:08:49 crc kubenswrapper[4660]: E1129 08:08:49.702597 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:09:04 crc kubenswrapper[4660]: I1129 08:09:04.693306 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:09:04 crc kubenswrapper[4660]: E1129 08:09:04.693923 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:09:05 crc kubenswrapper[4660]: I1129 08:09:05.683399 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" event={"ID":"fddda6dc-cca7-41a8-8be3-1e6647af2356","Type":"ContainerStarted","Data":"c880c0aea7cc09c991482679559ca6113a6398568536ca5ff8962529ee7d1ef1"} Nov 29 08:09:05 crc kubenswrapper[4660]: I1129 08:09:05.707597 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" podStartSLOduration=2.03772516 podStartE2EDuration="31.707580575s" podCreationTimestamp="2025-11-29 08:08:34 +0000 UTC" firstStartedPulling="2025-11-29 08:08:35.443083906 
+0000 UTC m=+3205.996613805" lastFinishedPulling="2025-11-29 08:09:05.112939301 +0000 UTC m=+3235.666469220" observedRunningTime="2025-11-29 08:09:05.706731273 +0000 UTC m=+3236.260261182" watchObservedRunningTime="2025-11-29 08:09:05.707580575 +0000 UTC m=+3236.261110474" Nov 29 08:09:18 crc kubenswrapper[4660]: I1129 08:09:18.693554 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:09:18 crc kubenswrapper[4660]: E1129 08:09:18.695688 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:09:29 crc kubenswrapper[4660]: I1129 08:09:29.701903 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:09:29 crc kubenswrapper[4660]: E1129 08:09:29.702580 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:09:41 crc kubenswrapper[4660]: I1129 08:09:41.693502 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:09:41 crc kubenswrapper[4660]: E1129 08:09:41.694292 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:09:45 crc kubenswrapper[4660]: E1129 08:09:45.343118 4660 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.65s" Nov 29 08:09:52 crc kubenswrapper[4660]: I1129 08:09:52.693527 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:09:52 crc kubenswrapper[4660]: E1129 08:09:52.694367 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:10:07 crc kubenswrapper[4660]: I1129 08:10:07.693281 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:10:08 crc kubenswrapper[4660]: I1129 08:10:08.533530 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" 
event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27"} Nov 29 08:10:13 crc kubenswrapper[4660]: I1129 08:10:13.747775 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75ddc44955-xj8mn" podUID="27a79873-e3bd-4172-b5c3-17a981a9a091" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.401215 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.412032 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.474381 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.537508 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsgfq\" (UniqueName: \"kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.537829 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.537944 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.639722 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsgfq\" (UniqueName: \"kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.639768 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.639803 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.640251 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.640263 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.662191 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsgfq\" (UniqueName: \"kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq\") pod \"redhat-operators-4mrdr\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:24 crc kubenswrapper[4660]: I1129 08:10:24.757149 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:10:25 crc kubenswrapper[4660]: I1129 08:10:25.301289 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:10:25 crc kubenswrapper[4660]: I1129 08:10:25.688523 4660 generic.go:334] "Generic (PLEG): container finished" podID="165e4678-d200-401c-8764-41ba2aff9963" containerID="36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3" exitCode=0 Nov 29 08:10:25 crc kubenswrapper[4660]: I1129 08:10:25.688565 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerDied","Data":"36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3"} Nov 29 08:10:25 crc kubenswrapper[4660]: I1129 08:10:25.688590 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerStarted","Data":"e7b1ee5952af397f4deb7d1a4e1f1f237b337e66646b0e33ba20fc696d5a4ff8"} Nov 29 08:10:25 crc kubenswrapper[4660]: I1129 08:10:25.692000 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:10:56 crc kubenswrapper[4660]: E1129 08:10:56.413917 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: pinging container registry registry.redhat.io: Get \"https://registry.redhat.io/v2/\": net/http: TLS handshake timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 29 08:10:56 crc kubenswrapper[4660]: E1129 08:10:56.414474 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsgfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4mrdr_openshift-marketplace(165e4678-d200-401c-8764-41ba2aff9963): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: pinging container registry registry.redhat.io: Get \"https://registry.redhat.io/v2/\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 29 08:10:56 crc kubenswrapper[4660]: E1129 08:10:56.415643 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: pinging container registry registry.redhat.io: Get \\\"https://registry.redhat.io/v2/\\\": net/http: TLS handshake timeout\"" pod="openshift-marketplace/redhat-operators-4mrdr" podUID="165e4678-d200-401c-8764-41ba2aff9963" Nov 29 08:10:57 crc kubenswrapper[4660]: E1129 08:10:57.002982 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4mrdr" podUID="165e4678-d200-401c-8764-41ba2aff9963" Nov 29 08:11:10 crc kubenswrapper[4660]: I1129 08:11:10.124385 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerStarted","Data":"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634"} Nov 29 08:11:15 crc kubenswrapper[4660]: I1129 08:11:15.168543 4660 generic.go:334] "Generic (PLEG): container finished" podID="165e4678-d200-401c-8764-41ba2aff9963" containerID="bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634" exitCode=0 Nov 29 08:11:15 crc kubenswrapper[4660]: I1129 08:11:15.168657 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerDied","Data":"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634"} Nov 29 08:11:16 crc kubenswrapper[4660]: I1129 08:11:16.179130 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerStarted","Data":"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8"} Nov 29 08:11:16 crc kubenswrapper[4660]: I1129 08:11:16.206740 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4mrdr" podStartSLOduration=2.258821075 podStartE2EDuration="52.20671733s" podCreationTimestamp="2025-11-29 08:10:24 +0000 UTC" firstStartedPulling="2025-11-29 08:10:25.691740327 +0000 UTC m=+3316.245270226" lastFinishedPulling="2025-11-29 08:11:15.639636582 +0000 UTC m=+3366.193166481" observedRunningTime="2025-11-29 08:11:16.195352204 +0000 UTC m=+3366.748882103" watchObservedRunningTime="2025-11-29 08:11:16.20671733 +0000 UTC m=+3366.760247239" Nov 29 08:11:24 crc kubenswrapper[4660]: I1129 08:11:24.758109 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:24 crc kubenswrapper[4660]: I1129 08:11:24.760656 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:24 crc kubenswrapper[4660]: I1129 08:11:24.819201 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:25 crc kubenswrapper[4660]: I1129 08:11:25.315654 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:25 crc kubenswrapper[4660]: I1129 08:11:25.601488 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.275118 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4mrdr" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="registry-server" containerID="cri-o://65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8" gracePeriod=2 Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.733010 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.976983 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities\") pod \"165e4678-d200-401c-8764-41ba2aff9963\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.977189 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content\") pod \"165e4678-d200-401c-8764-41ba2aff9963\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.977267 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsgfq\" (UniqueName: \"kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq\") pod \"165e4678-d200-401c-8764-41ba2aff9963\" (UID: \"165e4678-d200-401c-8764-41ba2aff9963\") " Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.977942 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities" (OuterVolumeSpecName: "utilities") pod "165e4678-d200-401c-8764-41ba2aff9963" (UID: "165e4678-d200-401c-8764-41ba2aff9963"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:27 crc kubenswrapper[4660]: I1129 08:11:27.984481 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq" (OuterVolumeSpecName: "kube-api-access-tsgfq") pod "165e4678-d200-401c-8764-41ba2aff9963" (UID: "165e4678-d200-401c-8764-41ba2aff9963"). InnerVolumeSpecName "kube-api-access-tsgfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.079504 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.079545 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsgfq\" (UniqueName: \"kubernetes.io/projected/165e4678-d200-401c-8764-41ba2aff9963-kube-api-access-tsgfq\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.106728 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "165e4678-d200-401c-8764-41ba2aff9963" (UID: "165e4678-d200-401c-8764-41ba2aff9963"). InnerVolumeSpecName "catalog-content". 
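
The MountVolume/SetUp entries earlier (reconciler_common.go:218/245, operation_generator.go:637) and the UnmountVolume.TearDown entries directly above are two sides of the same loop: kubelet's volume manager reconciles a desired state of world against an actual state of world, mounting what is missing and tearing down what is no longer wanted, then reporting "Volume detached" once teardown succeeds. A minimal sketch of that pattern, with illustrative structures rather than kubelet's real types:

```go
// Minimal sketch of the desired-vs-actual reconcile pattern behind the
// MountVolume / UnmountVolume entries. The real logic lives in kubelet's
// volumemanager reconciler; names and structures here are illustrative.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	// Unmount anything mounted that is no longer desired
	// (the "operationExecutor.UnmountVolume started" entries).
	for name := range actual {
		if !desired[name] {
			fmt.Printf("UnmountVolume started for volume %q\n", name)
			delete(actual, name) // TearDown, then report detachment
			fmt.Printf("Volume detached for volume %q\n", name)
		}
	}
	// Mount anything desired that is not yet mounted
	// (the "operationExecutor.MountVolume started" entries).
	for name := range desired {
		if !actual[name] {
			fmt.Printf("MountVolume started for volume %q\n", name)
			actual[name] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
		}
	}
}

func main() {
	// After the DELETE above, nothing is desired for the pod any more,
	// so every remaining mount is torn down.
	desired := map[string]bool{}
	actual := map[string]bool{
		"utilities": true, "catalog-content": true, "kube-api-access-tsgfq": true,
	}
	reconcile(desired, actual)
	fmt.Println("actual state now:", actual)
}
```
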
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.181089 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165e4678-d200-401c-8764-41ba2aff9963-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.286200 4660 generic.go:334] "Generic (PLEG): container finished" podID="165e4678-d200-401c-8764-41ba2aff9963" containerID="65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8" exitCode=0 Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.286260 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mrdr" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.286266 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerDied","Data":"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8"} Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.286349 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mrdr" event={"ID":"165e4678-d200-401c-8764-41ba2aff9963","Type":"ContainerDied","Data":"e7b1ee5952af397f4deb7d1a4e1f1f237b337e66646b0e33ba20fc696d5a4ff8"} Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.286388 4660 scope.go:117] "RemoveContainer" containerID="65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.313367 4660 scope.go:117] "RemoveContainer" containerID="bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.326703 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.352149 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4mrdr"] Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.361914 4660 scope.go:117] "RemoveContainer" containerID="36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.387744 4660 scope.go:117] "RemoveContainer" containerID="65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8" Nov 29 08:11:28 crc kubenswrapper[4660]: E1129 08:11:28.389650 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8\": container with ID starting with 65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8 not found: ID does not exist" containerID="65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.389715 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8"} err="failed to get container status \"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8\": rpc error: code = NotFound desc = could not find container \"65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8\": container with ID starting with 65e0cef8e03e43bb3c71b22a3aef7d0a280ccb77cad9a505960cf18bc9c453d8 not found: ID does not exist" Nov 29 08:11:28 crc 
kubenswrapper[4660]: I1129 08:11:28.389748 4660 scope.go:117] "RemoveContainer" containerID="bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634" Nov 29 08:11:28 crc kubenswrapper[4660]: E1129 08:11:28.390319 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634\": container with ID starting with bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634 not found: ID does not exist" containerID="bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.390372 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634"} err="failed to get container status \"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634\": rpc error: code = NotFound desc = could not find container \"bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634\": container with ID starting with bde2fa0d898f904f4bfac406ef927b4446c3e3425d9d7aed31c3f907eea9c634 not found: ID does not exist" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.390581 4660 scope.go:117] "RemoveContainer" containerID="36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3" Nov 29 08:11:28 crc kubenswrapper[4660]: E1129 08:11:28.392851 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3\": container with ID starting with 36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3 not found: ID does not exist" containerID="36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3" Nov 29 08:11:28 crc kubenswrapper[4660]: I1129 08:11:28.392906 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3"} err="failed to get container status \"36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3\": rpc error: code = NotFound desc = could not find container \"36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3\": container with ID starting with 36edb9e5cf23355a4b9306d954ffa9bf56871423bc061e5e340b9a3c5c6c03e3 not found: ID does not exist" Nov 29 08:11:29 crc kubenswrapper[4660]: I1129 08:11:29.708915 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165e4678-d200-401c-8764-41ba2aff9963" path="/var/lib/kubelet/pods/165e4678-d200-401c-8764-41ba2aff9963/volumes" Nov 29 08:12:34 crc kubenswrapper[4660]: I1129 08:12:34.879788 4660 generic.go:334] "Generic (PLEG): container finished" podID="fddda6dc-cca7-41a8-8be3-1e6647af2356" containerID="c880c0aea7cc09c991482679559ca6113a6398568536ca5ff8962529ee7d1ef1" exitCode=0 Nov 29 08:12:34 crc kubenswrapper[4660]: I1129 08:12:34.879857 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" event={"ID":"fddda6dc-cca7-41a8-8be3-1e6647af2356","Type":"ContainerDied","Data":"c880c0aea7cc09c991482679559ca6113a6398568536ca5ff8962529ee7d1ef1"} Nov 29 08:12:35 crc kubenswrapper[4660]: I1129 08:12:35.500350 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:12:35 crc kubenswrapper[4660]: I1129 08:12:35.500417 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.332788 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.492893 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwdkn\" (UniqueName: \"kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.492979 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.493013 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.493156 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.493251 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.493818 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.493855 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory\") pod \"fddda6dc-cca7-41a8-8be3-1e6647af2356\" (UID: \"fddda6dc-cca7-41a8-8be3-1e6647af2356\") " Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.499860 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle" (OuterVolumeSpecName: 
"telemetry-combined-ca-bundle") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.499918 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn" (OuterVolumeSpecName: "kube-api-access-nwdkn") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "kube-api-access-nwdkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.525428 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.525784 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.528636 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.528841 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.535023 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory" (OuterVolumeSpecName: "inventory") pod "fddda6dc-cca7-41a8-8be3-1e6647af2356" (UID: "fddda6dc-cca7-41a8-8be3-1e6647af2356"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.595935 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.595990 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.596004 4660 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-inventory\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.596017 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwdkn\" (UniqueName: \"kubernetes.io/projected/fddda6dc-cca7-41a8-8be3-1e6647af2356-kube-api-access-nwdkn\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.596031 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.596043 4660 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.596057 4660 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fddda6dc-cca7-41a8-8be3-1e6647af2356-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.916900 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" event={"ID":"fddda6dc-cca7-41a8-8be3-1e6647af2356","Type":"ContainerDied","Data":"9eb13688b50028ce8e126ed11cde0cebac3521bcfeb5eb6a06abf06b55dec1db"} Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.916947 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb13688b50028ce8e126ed11cde0cebac3521bcfeb5eb6a06abf06b55dec1db" Nov 29 08:12:36 crc kubenswrapper[4660]: I1129 08:12:36.916983 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc" Nov 29 08:13:05 crc kubenswrapper[4660]: I1129 08:13:05.499974 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:05 crc kubenswrapper[4660]: I1129 08:13:05.501321 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.729978 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 08:13:19 crc kubenswrapper[4660]: E1129 08:13:19.731047 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="registry-server" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731064 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="registry-server" Nov 29 08:13:19 crc kubenswrapper[4660]: E1129 08:13:19.731082 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="extract-utilities" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731093 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="extract-utilities" Nov 29 08:13:19 crc kubenswrapper[4660]: E1129 08:13:19.731132 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddda6dc-cca7-41a8-8be3-1e6647af2356" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731144 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddda6dc-cca7-41a8-8be3-1e6647af2356" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:19 crc kubenswrapper[4660]: E1129 08:13:19.731163 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="extract-content" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731171 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="extract-content" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731392 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddda6dc-cca7-41a8-8be3-1e6647af2356" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.731406 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="165e4678-d200-401c-8764-41ba2aff9963" containerName="registry-server" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.732182 4660 util.go:30] "No sandbox for pod can be found. 
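
The cpu_manager/state_mem/memory_manager "RemoveStaleState" entries above fire when the new tempest-tests-tempest pod is admitted: CPU and memory assignments still recorded for pods that no longer exist (the deleted marketplace and telemetry pods) are purged before resources are handed to the newcomer. A sketch of that cleanup, with illustrative data structures and UIDs truncated from the log:

```go
// Sketch of the RemoveStaleState pattern above: on pod admission the
// CPU manager drops assignments whose pods are no longer active.
// Data structures and the CPUSet strings are illustrative only.
package main

import "fmt"

func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", name, podUID)
			delete(containers, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	// Stale assignments left over from the two deleted pods above.
	assignments := map[string]map[string]string{
		"165e4678": {"registry-server": "0-3", "extract-content": "0-3", "extract-utilities": "0-3"},
		"fddda6dc": {"telemetry-edpm-deployment-openstack-edpm-ipam": "0-3"},
	}
	active := map[string]bool{"2731c762": true} // tempest-tests-tempest
	removeStaleState(assignments, active)
	fmt.Println("remaining assignments:", assignments)
}
```
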
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.734430 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfk4l" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.735015 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.735073 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.738539 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.740475 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899659 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899716 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899775 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899812 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899868 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899904 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899920 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.899949 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:19 crc kubenswrapper[4660]: I1129 08:13:19.900052 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq26p\" (UniqueName: \"kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.001814 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.001878 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.001941 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.001985 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.002015 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.002066 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.002090 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.002131 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.002163 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq26p\" (UniqueName: \"kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.003172 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.003196 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.003211 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.003944 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.004467 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.009575 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.011095 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.018844 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.026073 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq26p\" (UniqueName: \"kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.038232 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.056557 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:13:20 crc kubenswrapper[4660]: I1129 08:13:20.562944 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 29 08:13:21 crc kubenswrapper[4660]: I1129 08:13:21.306686 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2731c762-e02a-4472-b014-19739f6c47da","Type":"ContainerStarted","Data":"63c963162f6a453d45da6561b45025ab953a90c5f987d842f3a645d46e749549"} Nov 29 08:13:35 crc kubenswrapper[4660]: I1129 08:13:35.500075 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:13:35 crc kubenswrapper[4660]: I1129 08:13:35.501356 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:13:35 crc kubenswrapper[4660]: I1129 08:13:35.501410 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:13:35 crc kubenswrapper[4660]: I1129 08:13:35.502442 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:13:35 crc kubenswrapper[4660]: I1129 08:13:35.502496 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27" gracePeriod=600 Nov 29 08:13:36 crc kubenswrapper[4660]: I1129 08:13:36.469519 4660 generic.go:334] "Generic 
(PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27" exitCode=0 Nov 29 08:13:36 crc kubenswrapper[4660]: I1129 08:13:36.469707 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27"} Nov 29 08:13:36 crc kubenswrapper[4660]: I1129 08:13:36.470759 4660 scope.go:117] "RemoveContainer" containerID="dac7be468eae55b3209e82628a201a7f5ccf7335c15bb825e3a0c82113637cbf" Nov 29 08:14:00 crc kubenswrapper[4660]: E1129 08:14:00.826878 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 29 08:14:00 crc kubenswrapper[4660]: E1129 08:14:00.827633 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq26p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2731c762-e02a-4472-b014-19739f6c47da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:14:00 crc kubenswrapper[4660]: E1129 08:14:00.829728 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2731c762-e02a-4472-b014-19739f6c47da" Nov 29 08:14:01 crc kubenswrapper[4660]: E1129 08:14:01.695641 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2731c762-e02a-4472-b014-19739f6c47da" Nov 29 08:14:01 crc kubenswrapper[4660]: I1129 08:14:01.720118 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99"} Nov 29 08:14:14 crc kubenswrapper[4660]: I1129 08:14:14.675991 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 29 08:14:16 crc kubenswrapper[4660]: I1129 08:14:16.829064 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2731c762-e02a-4472-b014-19739f6c47da","Type":"ContainerStarted","Data":"cbed9b6fa31f31e9545353879042fde4c92db46ef8086fda2a4ebb787dd24875"} Nov 29 08:14:16 crc kubenswrapper[4660]: I1129 08:14:16.854257 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.748816873 podStartE2EDuration="58.854240728s" podCreationTimestamp="2025-11-29 08:13:18 +0000 UTC" firstStartedPulling="2025-11-29 08:13:20.567935729 +0000 UTC m=+3491.121465628" lastFinishedPulling="2025-11-29 08:14:14.673359564 +0000 UTC m=+3545.226889483" observedRunningTime="2025-11-29 08:14:16.847145826 +0000 UTC m=+3547.400675725" watchObservedRunningTime="2025-11-29 08:14:16.854240728 +0000 UTC m=+3547.407770627" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.150850 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g"] Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.153258 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.155971 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.156071 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.167622 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g"] Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.345881 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.346226 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.349141 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5sd\" (UniqueName: \"kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.451506 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.451874 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5sd\" (UniqueName: \"kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.452007 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.455544 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume\") pod 
\"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.459196 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.488243 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5sd\" (UniqueName: \"kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd\") pod \"collect-profiles-29406735-qzr4g\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:00 crc kubenswrapper[4660]: I1129 08:15:00.529128 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:01 crc kubenswrapper[4660]: I1129 08:15:01.135563 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g"] Nov 29 08:15:01 crc kubenswrapper[4660]: I1129 08:15:01.239497 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" event={"ID":"b5857d09-c467-4ecc-b4fd-44ea4da8933d","Type":"ContainerStarted","Data":"32d6354195d2085b2886ab0bafc83a039e13c0e3c6877c247f13947feb0f0c53"} Nov 29 08:15:02 crc kubenswrapper[4660]: I1129 08:15:02.251437 4660 generic.go:334] "Generic (PLEG): container finished" podID="b5857d09-c467-4ecc-b4fd-44ea4da8933d" containerID="590b6e42821674416ed52d7017eeac274493e91a5aaaabe3f9c170acc0c5cd4d" exitCode=0 Nov 29 08:15:02 crc kubenswrapper[4660]: I1129 08:15:02.251512 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" event={"ID":"b5857d09-c467-4ecc-b4fd-44ea4da8933d","Type":"ContainerDied","Data":"590b6e42821674416ed52d7017eeac274493e91a5aaaabe3f9c170acc0c5cd4d"} Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.640218 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.816716 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5sd\" (UniqueName: \"kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd\") pod \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.816988 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume\") pod \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.817157 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume\") pod \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\" (UID: \"b5857d09-c467-4ecc-b4fd-44ea4da8933d\") " Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.817549 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5857d09-c467-4ecc-b4fd-44ea4da8933d" (UID: "b5857d09-c467-4ecc-b4fd-44ea4da8933d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.818431 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5857d09-c467-4ecc-b4fd-44ea4da8933d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.826012 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd" (OuterVolumeSpecName: "kube-api-access-6c5sd") pod "b5857d09-c467-4ecc-b4fd-44ea4da8933d" (UID: "b5857d09-c467-4ecc-b4fd-44ea4da8933d"). InnerVolumeSpecName "kube-api-access-6c5sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.835599 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5857d09-c467-4ecc-b4fd-44ea4da8933d" (UID: "b5857d09-c467-4ecc-b4fd-44ea4da8933d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.920520 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5857d09-c467-4ecc-b4fd-44ea4da8933d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:03 crc kubenswrapper[4660]: I1129 08:15:03.920553 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c5sd\" (UniqueName: \"kubernetes.io/projected/b5857d09-c467-4ecc-b4fd-44ea4da8933d-kube-api-access-6c5sd\") on node \"crc\" DevicePath \"\"" Nov 29 08:15:04 crc kubenswrapper[4660]: I1129 08:15:04.267650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" event={"ID":"b5857d09-c467-4ecc-b4fd-44ea4da8933d","Type":"ContainerDied","Data":"32d6354195d2085b2886ab0bafc83a039e13c0e3c6877c247f13947feb0f0c53"} Nov 29 08:15:04 crc kubenswrapper[4660]: I1129 08:15:04.267694 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d6354195d2085b2886ab0bafc83a039e13c0e3c6877c247f13947feb0f0c53" Nov 29 08:15:04 crc kubenswrapper[4660]: I1129 08:15:04.267711 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406735-qzr4g" Nov 29 08:15:04 crc kubenswrapper[4660]: I1129 08:15:04.709703 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns"] Nov 29 08:15:04 crc kubenswrapper[4660]: I1129 08:15:04.717278 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406690-285ns"] Nov 29 08:15:05 crc kubenswrapper[4660]: I1129 08:15:05.708631 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6137363e-6e77-46e6-b455-9a8faf6119ba" path="/var/lib/kubelet/pods/6137363e-6e77-46e6-b455-9a8faf6119ba/volumes" Nov 29 08:15:07 crc kubenswrapper[4660]: I1129 08:15:07.195039 4660 scope.go:117] "RemoveContainer" containerID="098415aca6441cf9608411e26e850ff68008bd3bb0acc628f45ed8c998d51a24" Nov 29 08:16:05 crc kubenswrapper[4660]: I1129 08:16:05.500383 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:16:05 crc kubenswrapper[4660]: I1129 08:16:05.501061 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:16:35 crc kubenswrapper[4660]: I1129 08:16:35.500030 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:16:35 crc kubenswrapper[4660]: I1129 08:16:35.500648 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:05 crc kubenswrapper[4660]: I1129 08:17:05.500497 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:17:05 crc kubenswrapper[4660]: I1129 08:17:05.501092 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:17:05 crc kubenswrapper[4660]: I1129 08:17:05.501144 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:17:05 crc kubenswrapper[4660]: I1129 08:17:05.501868 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:17:05 crc kubenswrapper[4660]: I1129 08:17:05.501922 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" gracePeriod=600 Nov 29 08:17:05 crc kubenswrapper[4660]: E1129 08:17:05.644332 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:06 crc kubenswrapper[4660]: I1129 08:17:06.393936 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" exitCode=0 Nov 29 08:17:06 crc kubenswrapper[4660]: I1129 08:17:06.393988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99"} Nov 29 08:17:06 crc kubenswrapper[4660]: I1129 08:17:06.394031 4660 scope.go:117] "RemoveContainer" containerID="badaf5dec387a67befbdd3691bf902d3d59081d42814af3f527ad4ccb5e03a27" Nov 29 08:17:06 crc kubenswrapper[4660]: I1129 08:17:06.395565 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:17:06 crc kubenswrapper[4660]: E1129 08:17:06.396005 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:19 crc kubenswrapper[4660]: I1129 08:17:19.728526 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:17:19 crc kubenswrapper[4660]: E1129 08:17:19.729814 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:30 crc kubenswrapper[4660]: I1129 08:17:30.693263 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:17:30 crc kubenswrapper[4660]: E1129 08:17:30.694252 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:43 crc kubenswrapper[4660]: I1129 08:17:43.693718 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:17:43 crc kubenswrapper[4660]: E1129 08:17:43.694709 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.537381 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:17:54 crc kubenswrapper[4660]: E1129 08:17:54.538314 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5857d09-c467-4ecc-b4fd-44ea4da8933d" containerName="collect-profiles" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.538332 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5857d09-c467-4ecc-b4fd-44ea4da8933d" containerName="collect-profiles" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.538634 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5857d09-c467-4ecc-b4fd-44ea4da8933d" containerName="collect-profiles" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.541420 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.565548 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.619184 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.619292 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdwv\" (UniqueName: \"kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.619449 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.721067 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.721263 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdwv\" (UniqueName: \"kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.721335 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.721665 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.721913 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.740721 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5wdwv\" (UniqueName: \"kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv\") pod \"redhat-marketplace-9hjf9\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:54 crc kubenswrapper[4660]: I1129 08:17:54.864840 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:17:55 crc kubenswrapper[4660]: I1129 08:17:55.380072 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:17:55 crc kubenswrapper[4660]: I1129 08:17:55.848188 4660 generic.go:334] "Generic (PLEG): container finished" podID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerID="d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd" exitCode=0 Nov 29 08:17:55 crc kubenswrapper[4660]: I1129 08:17:55.848494 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerDied","Data":"d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd"} Nov 29 08:17:55 crc kubenswrapper[4660]: I1129 08:17:55.848589 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerStarted","Data":"25d2d6ce48b1170e66857d6c9ba44785ba606a1281f6a4caa00196463ad0a320"} Nov 29 08:17:55 crc kubenswrapper[4660]: I1129 08:17:55.850227 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:17:56 crc kubenswrapper[4660]: I1129 08:17:56.868156 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerStarted","Data":"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626"} Nov 29 08:17:57 crc kubenswrapper[4660]: I1129 08:17:57.877403 4660 generic.go:334] "Generic (PLEG): container finished" podID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerID="6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626" exitCode=0 Nov 29 08:17:57 crc kubenswrapper[4660]: I1129 08:17:57.877506 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerDied","Data":"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626"} Nov 29 08:17:58 crc kubenswrapper[4660]: I1129 08:17:58.694101 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:17:58 crc kubenswrapper[4660]: E1129 08:17:58.694574 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:17:58 crc kubenswrapper[4660]: I1129 08:17:58.890133 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" 
event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerStarted","Data":"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4"} Nov 29 08:17:58 crc kubenswrapper[4660]: I1129 08:17:58.913772 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9hjf9" podStartSLOduration=2.426429803 podStartE2EDuration="4.913754247s" podCreationTimestamp="2025-11-29 08:17:54 +0000 UTC" firstStartedPulling="2025-11-29 08:17:55.850015093 +0000 UTC m=+3766.403544992" lastFinishedPulling="2025-11-29 08:17:58.337339537 +0000 UTC m=+3768.890869436" observedRunningTime="2025-11-29 08:17:58.910986942 +0000 UTC m=+3769.464516861" watchObservedRunningTime="2025-11-29 08:17:58.913754247 +0000 UTC m=+3769.467284146" Nov 29 08:18:04 crc kubenswrapper[4660]: I1129 08:18:04.866264 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:04 crc kubenswrapper[4660]: I1129 08:18:04.866747 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:04 crc kubenswrapper[4660]: I1129 08:18:04.922722 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:04 crc kubenswrapper[4660]: I1129 08:18:04.996167 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:05 crc kubenswrapper[4660]: I1129 08:18:05.167856 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:18:06 crc kubenswrapper[4660]: I1129 08:18:06.964886 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9hjf9" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="registry-server" containerID="cri-o://47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4" gracePeriod=2 Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.508765 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.611596 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content\") pod \"24611b93-b791-4ae3-a4e5-f14fd763018a\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.611668 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities\") pod \"24611b93-b791-4ae3-a4e5-f14fd763018a\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.611917 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdwv\" (UniqueName: \"kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv\") pod \"24611b93-b791-4ae3-a4e5-f14fd763018a\" (UID: \"24611b93-b791-4ae3-a4e5-f14fd763018a\") " Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.612607 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities" (OuterVolumeSpecName: "utilities") pod "24611b93-b791-4ae3-a4e5-f14fd763018a" (UID: "24611b93-b791-4ae3-a4e5-f14fd763018a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.618056 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv" (OuterVolumeSpecName: "kube-api-access-5wdwv") pod "24611b93-b791-4ae3-a4e5-f14fd763018a" (UID: "24611b93-b791-4ae3-a4e5-f14fd763018a"). InnerVolumeSpecName "kube-api-access-5wdwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.629100 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24611b93-b791-4ae3-a4e5-f14fd763018a" (UID: "24611b93-b791-4ae3-a4e5-f14fd763018a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.714084 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdwv\" (UniqueName: \"kubernetes.io/projected/24611b93-b791-4ae3-a4e5-f14fd763018a-kube-api-access-5wdwv\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.714118 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.714128 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611b93-b791-4ae3-a4e5-f14fd763018a-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.979803 4660 generic.go:334] "Generic (PLEG): container finished" podID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerID="47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4" exitCode=0 Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.979843 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerDied","Data":"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4"} Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.979868 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hjf9" event={"ID":"24611b93-b791-4ae3-a4e5-f14fd763018a","Type":"ContainerDied","Data":"25d2d6ce48b1170e66857d6c9ba44785ba606a1281f6a4caa00196463ad0a320"} Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.979884 4660 scope.go:117] "RemoveContainer" containerID="47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4" Nov 29 08:18:07 crc kubenswrapper[4660]: I1129 08:18:07.979993 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hjf9" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.001099 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.001886 4660 scope.go:117] "RemoveContainer" containerID="6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.010021 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hjf9"] Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.029602 4660 scope.go:117] "RemoveContainer" containerID="d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.083986 4660 scope.go:117] "RemoveContainer" containerID="47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4" Nov 29 08:18:08 crc kubenswrapper[4660]: E1129 08:18:08.084505 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4\": container with ID starting with 47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4 not found: ID does not exist" containerID="47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.084550 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4"} err="failed to get container status \"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4\": rpc error: code = NotFound desc = could not find container \"47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4\": container with ID starting with 47110399d4b97f71152d8cbc7ce468f176acb9138f42b1744e58896b77f2ede4 not found: ID does not exist" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.084578 4660 scope.go:117] "RemoveContainer" containerID="6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626" Nov 29 08:18:08 crc kubenswrapper[4660]: E1129 08:18:08.085440 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626\": container with ID starting with 6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626 not found: ID does not exist" containerID="6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.085481 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626"} err="failed to get container status \"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626\": rpc error: code = NotFound desc = could not find container \"6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626\": container with ID starting with 6b0aef45b98e05a18898dc4e3b67505af0f62c278dbfc2ecabba38c0e58e7626 not found: ID does not exist" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.085511 4660 scope.go:117] "RemoveContainer" containerID="d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd" Nov 29 08:18:08 crc kubenswrapper[4660]: E1129 08:18:08.085790 4660 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd\": container with ID starting with d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd not found: ID does not exist" containerID="d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd" Nov 29 08:18:08 crc kubenswrapper[4660]: I1129 08:18:08.085826 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd"} err="failed to get container status \"d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd\": rpc error: code = NotFound desc = could not find container \"d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd\": container with ID starting with d4c4989f251a1dec51025edfc798544f0d943950217a42a4b13084b3d065edbd not found: ID does not exist" Nov 29 08:18:09 crc kubenswrapper[4660]: I1129 08:18:09.710653 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" path="/var/lib/kubelet/pods/24611b93-b791-4ae3-a4e5-f14fd763018a/volumes" Nov 29 08:18:10 crc kubenswrapper[4660]: I1129 08:18:10.694049 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:18:10 crc kubenswrapper[4660]: E1129 08:18:10.694487 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:18:22 crc kubenswrapper[4660]: I1129 08:18:22.693904 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:18:22 crc kubenswrapper[4660]: E1129 08:18:22.694641 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:18:37 crc kubenswrapper[4660]: I1129 08:18:37.697122 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:18:37 crc kubenswrapper[4660]: E1129 08:18:37.697894 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:18:51 crc kubenswrapper[4660]: I1129 08:18:51.693505 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:18:51 crc kubenswrapper[4660]: E1129 08:18:51.694212 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:19:06 crc kubenswrapper[4660]: I1129 08:19:06.694089 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:19:06 crc kubenswrapper[4660]: E1129 08:19:06.694796 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:19:18 crc kubenswrapper[4660]: I1129 08:19:18.694174 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:19:18 crc kubenswrapper[4660]: E1129 08:19:18.694791 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:19:32 crc kubenswrapper[4660]: I1129 08:19:32.697474 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:19:32 crc kubenswrapper[4660]: E1129 08:19:32.698223 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:19:45 crc kubenswrapper[4660]: I1129 08:19:45.693454 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:19:45 crc kubenswrapper[4660]: E1129 08:19:45.695290 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:20:00 crc kubenswrapper[4660]: I1129 08:20:00.693335 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:20:00 crc kubenswrapper[4660]: E1129 08:20:00.694103 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:20:14 crc kubenswrapper[4660]: I1129 08:20:14.693897 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:20:14 crc kubenswrapper[4660]: E1129 08:20:14.694738 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:20:26 crc kubenswrapper[4660]: I1129 08:20:26.694283 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:20:26 crc kubenswrapper[4660]: E1129 08:20:26.695507 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:20:32 crc kubenswrapper[4660]: I1129 08:20:32.363509 4660 generic.go:334] "Generic (PLEG): container finished" podID="2731c762-e02a-4472-b014-19739f6c47da" containerID="cbed9b6fa31f31e9545353879042fde4c92db46ef8086fda2a4ebb787dd24875" exitCode=0 Nov 29 08:20:32 crc kubenswrapper[4660]: I1129 08:20:32.363592 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2731c762-e02a-4472-b014-19739f6c47da","Type":"ContainerDied","Data":"cbed9b6fa31f31e9545353879042fde4c92db46ef8086fda2a4ebb787dd24875"} Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.761439 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905701 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905756 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq26p\" (UniqueName: \"kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905794 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905836 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905857 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905911 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905963 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.905995 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.906012 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret\") pod \"2731c762-e02a-4472-b014-19739f6c47da\" (UID: \"2731c762-e02a-4472-b014-19739f6c47da\") " Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.907174 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.908225 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data" (OuterVolumeSpecName: "config-data") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.911310 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.911355 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.928137 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p" (OuterVolumeSpecName: "kube-api-access-pq26p") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "kube-api-access-pq26p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.937213 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.937864 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.945210 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:20:33 crc kubenswrapper[4660]: I1129 08:20:33.957294 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2731c762-e02a-4472-b014-19739f6c47da" (UID: "2731c762-e02a-4472-b014-19739f6c47da"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008438 4660 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008473 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq26p\" (UniqueName: \"kubernetes.io/projected/2731c762-e02a-4472-b014-19739f6c47da-kube-api-access-pq26p\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008484 4660 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008503 4660 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2731c762-e02a-4472-b014-19739f6c47da-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008545 4660 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008557 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008566 4660 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008575 4660 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2731c762-e02a-4472-b014-19739f6c47da-config-data\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.008584 4660 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2731c762-e02a-4472-b014-19739f6c47da-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.029999 4660 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.110241 4660 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.386591 4660 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"2731c762-e02a-4472-b014-19739f6c47da","Type":"ContainerDied","Data":"63c963162f6a453d45da6561b45025ab953a90c5f987d842f3a645d46e749549"} Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.387004 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c963162f6a453d45da6561b45025ab953a90c5f987d842f3a645d46e749549" Nov 29 08:20:34 crc kubenswrapper[4660]: I1129 08:20:34.387096 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 29 08:20:39 crc kubenswrapper[4660]: I1129 08:20:39.698984 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:20:39 crc kubenswrapper[4660]: E1129 08:20:39.699431 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.834725 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:20:43 crc kubenswrapper[4660]: E1129 08:20:43.835513 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="extract-content" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835524 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="extract-content" Nov 29 08:20:43 crc kubenswrapper[4660]: E1129 08:20:43.835544 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2731c762-e02a-4472-b014-19739f6c47da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835550 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2731c762-e02a-4472-b014-19739f6c47da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:20:43 crc kubenswrapper[4660]: E1129 08:20:43.835558 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="registry-server" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835564 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="registry-server" Nov 29 08:20:43 crc kubenswrapper[4660]: E1129 08:20:43.835577 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="extract-utilities" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835583 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="extract-utilities" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835787 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="24611b93-b791-4ae3-a4e5-f14fd763018a" containerName="registry-server" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.835815 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2731c762-e02a-4472-b014-19739f6c47da" containerName="tempest-tests-tempest-tests-runner" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 
08:20:43.836771 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.839115 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfk4l" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.847553 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.904359 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ftc\" (UniqueName: \"kubernetes.io/projected/efec8b85-1d8a-4f33-b482-a08afe9737bf-kube-api-access-22ftc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:43 crc kubenswrapper[4660]: I1129 08:20:43.904517 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.006158 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.006271 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ftc\" (UniqueName: \"kubernetes.io/projected/efec8b85-1d8a-4f33-b482-a08afe9737bf-kube-api-access-22ftc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.006597 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.027220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ftc\" (UniqueName: \"kubernetes.io/projected/efec8b85-1d8a-4f33-b482-a08afe9737bf-kube-api-access-22ftc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.040675 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"efec8b85-1d8a-4f33-b482-a08afe9737bf\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.170383 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 29 08:20:44 crc kubenswrapper[4660]: I1129 08:20:44.634496 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 29 08:20:44 crc kubenswrapper[4660]: W1129 08:20:44.649885 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefec8b85_1d8a_4f33_b482_a08afe9737bf.slice/crio-f91c27c0606f7ce82d59f63cd4063827e23edd61e4db3e09fd51ae98b532f9ea WatchSource:0}: Error finding container f91c27c0606f7ce82d59f63cd4063827e23edd61e4db3e09fd51ae98b532f9ea: Status 404 returned error can't find the container with id f91c27c0606f7ce82d59f63cd4063827e23edd61e4db3e09fd51ae98b532f9ea Nov 29 08:20:45 crc kubenswrapper[4660]: I1129 08:20:45.504103 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"efec8b85-1d8a-4f33-b482-a08afe9737bf","Type":"ContainerStarted","Data":"f91c27c0606f7ce82d59f63cd4063827e23edd61e4db3e09fd51ae98b532f9ea"} Nov 29 08:20:46 crc kubenswrapper[4660]: I1129 08:20:46.521838 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"efec8b85-1d8a-4f33-b482-a08afe9737bf","Type":"ContainerStarted","Data":"63696311fd2746fe9a31bb494219031da8cc40fc0a19c73cf64edb1e7f3108b8"} Nov 29 08:20:46 crc kubenswrapper[4660]: I1129 08:20:46.547480 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.679187269 podStartE2EDuration="3.547458395s" podCreationTimestamp="2025-11-29 08:20:43 +0000 UTC" firstStartedPulling="2025-11-29 08:20:44.652540345 +0000 UTC m=+3935.206070244" lastFinishedPulling="2025-11-29 08:20:45.520811471 +0000 UTC m=+3936.074341370" observedRunningTime="2025-11-29 08:20:46.540410314 +0000 UTC m=+3937.093940223" watchObservedRunningTime="2025-11-29 08:20:46.547458395 +0000 UTC m=+3937.100988314" Nov 29 08:20:53 crc kubenswrapper[4660]: I1129 08:20:53.693374 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:20:53 crc kubenswrapper[4660]: E1129 08:20:53.694192 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:21:07 crc kubenswrapper[4660]: I1129 08:21:07.693673 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:21:07 crc kubenswrapper[4660]: E1129 08:21:07.694563 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.743517 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd6qz/must-gather-8j26b"] Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.745500 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.750759 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd6qz"/"openshift-service-ca.crt" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.757672 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd6qz"/"kube-root-ca.crt" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.775404 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd6qz/must-gather-8j26b"] Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.842046 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65ndx\" (UniqueName: \"kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.842207 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.942884 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.942978 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65ndx\" (UniqueName: \"kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:10 crc kubenswrapper[4660]: I1129 08:21:10.943383 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:11 crc kubenswrapper[4660]: I1129 08:21:11.000521 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65ndx\" (UniqueName: \"kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx\") pod \"must-gather-8j26b\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") " pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:11 crc 
kubenswrapper[4660]: I1129 08:21:11.060795 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/must-gather-8j26b" Nov 29 08:21:11 crc kubenswrapper[4660]: I1129 08:21:11.639527 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd6qz/must-gather-8j26b"] Nov 29 08:21:11 crc kubenswrapper[4660]: W1129 08:21:11.640498 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd729116_e36e_43fe_bdae_388a4bcb0976.slice/crio-4c205367bf04f21dd9dba2ec170bbe5d47ccb49b2217eb1580416e22905bc747 WatchSource:0}: Error finding container 4c205367bf04f21dd9dba2ec170bbe5d47ccb49b2217eb1580416e22905bc747: Status 404 returned error can't find the container with id 4c205367bf04f21dd9dba2ec170bbe5d47ccb49b2217eb1580416e22905bc747 Nov 29 08:21:11 crc kubenswrapper[4660]: I1129 08:21:11.782279 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/must-gather-8j26b" event={"ID":"bd729116-e36e-43fe-bdae-388a4bcb0976","Type":"ContainerStarted","Data":"4c205367bf04f21dd9dba2ec170bbe5d47ccb49b2217eb1580416e22905bc747"} Nov 29 08:21:17 crc kubenswrapper[4660]: I1129 08:21:17.839035 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/must-gather-8j26b" event={"ID":"bd729116-e36e-43fe-bdae-388a4bcb0976","Type":"ContainerStarted","Data":"7dfbde036d9fd3ce3598752ea21a1ed62c8a1b3c166c74779bb741cede975038"} Nov 29 08:21:17 crc kubenswrapper[4660]: I1129 08:21:17.839570 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/must-gather-8j26b" event={"ID":"bd729116-e36e-43fe-bdae-388a4bcb0976","Type":"ContainerStarted","Data":"fb378b62c6155db125d9f97935d62f38769da551038b7f7a99c4fb024578e785"} Nov 29 08:21:17 crc kubenswrapper[4660]: I1129 08:21:17.863933 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd6qz/must-gather-8j26b" podStartSLOduration=3.308261259 podStartE2EDuration="7.863913977s" podCreationTimestamp="2025-11-29 08:21:10 +0000 UTC" firstStartedPulling="2025-11-29 08:21:11.646887642 +0000 UTC m=+3962.200417541" lastFinishedPulling="2025-11-29 08:21:16.20254036 +0000 UTC m=+3966.756070259" observedRunningTime="2025-11-29 08:21:17.856054983 +0000 UTC m=+3968.409584882" watchObservedRunningTime="2025-11-29 08:21:17.863913977 +0000 UTC m=+3968.417443876" Nov 29 08:21:20 crc kubenswrapper[4660]: I1129 08:21:20.693857 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:21:20 crc kubenswrapper[4660]: E1129 08:21:20.694506 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.305910 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-d6nmr"] Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.307455 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.309331 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd6qz"/"default-dockercfg-xngrn" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.443997 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.444175 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2872r\" (UniqueName: \"kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.546373 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2872r\" (UniqueName: \"kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.547263 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.547419 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.568414 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2872r\" (UniqueName: \"kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r\") pod \"crc-debug-d6nmr\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.624136 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:21:21 crc kubenswrapper[4660]: I1129 08:21:21.871038 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" event={"ID":"c516f7de-a9fe-4aa7-9d1c-c5b115213d88","Type":"ContainerStarted","Data":"c2c034222153a138d3f6544568b96f4e0637da4d653a20627de7b4dba466a33d"} Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.746173 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.750095 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.757723 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.899803 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.900052 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:23 crc kubenswrapper[4660]: I1129 08:21:23.900246 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmngg\" (UniqueName: \"kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.001928 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmngg\" (UniqueName: \"kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.002294 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.002314 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.002856 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.003169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.046805 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xmngg\" (UniqueName: \"kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg\") pod \"redhat-operators-c2bcx\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.136780 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.661907 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:21:24 crc kubenswrapper[4660]: W1129 08:21:24.669260 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c26dfec_e0b4_4e54_9c12_9a117ac0596c.slice/crio-36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4 WatchSource:0}: Error finding container 36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4: Status 404 returned error can't find the container with id 36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4 Nov 29 08:21:24 crc kubenswrapper[4660]: I1129 08:21:24.908339 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerStarted","Data":"36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4"} Nov 29 08:21:25 crc kubenswrapper[4660]: I1129 08:21:25.928095 4660 generic.go:334] "Generic (PLEG): container finished" podID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerID="f397edeabb2ab7fbb8268f920f8ad2c950dee6f825d61c5bfe27f1bf593c5257" exitCode=0 Nov 29 08:21:25 crc kubenswrapper[4660]: I1129 08:21:25.928298 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerDied","Data":"f397edeabb2ab7fbb8268f920f8ad2c950dee6f825d61c5bfe27f1bf593c5257"} Nov 29 08:21:26 crc kubenswrapper[4660]: I1129 08:21:26.945968 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerStarted","Data":"23b36c62966d685c10bac49bdb385df003283296dbb9e4bd4704b544f73bb6b9"} Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.093673 4660 generic.go:334] "Generic (PLEG): container finished" podID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerID="23b36c62966d685c10bac49bdb385df003283296dbb9e4bd4704b544f73bb6b9" exitCode=0 Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.093770 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerDied","Data":"23b36c62966d685c10bac49bdb385df003283296dbb9e4bd4704b544f73bb6b9"} Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.374221 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.376092 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.400464 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.490017 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9ml\" (UniqueName: \"kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.490239 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.490283 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.592313 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.592574 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.592617 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9ml\" (UniqueName: \"kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.593314 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.593531 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.612765 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cs9ml\" (UniqueName: \"kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml\") pod \"certified-operators-7lrp6\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.709651 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.927586 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.938670 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:33 crc kubenswrapper[4660]: I1129 08:21:33.953730 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.126131 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.126233 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrrj\" (UniqueName: \"kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.127056 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.228588 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.228739 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.228833 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbrrj\" (UniqueName: \"kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.229200 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.229209 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.269316 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbrrj\" (UniqueName: \"kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj\") pod \"community-operators-cnts7\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:34 crc kubenswrapper[4660]: I1129 08:21:34.284873 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:35 crc kubenswrapper[4660]: I1129 08:21:35.697471 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:21:35 crc kubenswrapper[4660]: E1129 08:21:35.697957 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:21:39 crc kubenswrapper[4660]: E1129 08:21:39.894697 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 29 08:21:39 crc kubenswrapper[4660]: E1129 08:21:39.895596 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2872r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-d6nmr_openshift-must-gather-bd6qz(c516f7de-a9fe-4aa7-9d1c-c5b115213d88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 29 08:21:39 crc kubenswrapper[4660]: E1129 08:21:39.899756 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" Nov 29 08:21:40 crc kubenswrapper[4660]: I1129 08:21:40.062255 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:40 crc kubenswrapper[4660]: I1129 08:21:40.092095 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:40 crc kubenswrapper[4660]: I1129 08:21:40.175951 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerStarted","Data":"7f5a70098efec1bad32359ed1c179592e56b9fb5123b040104ba44acaaf1b11f"} Nov 29 08:21:40 crc kubenswrapper[4660]: I1129 08:21:40.177429 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerStarted","Data":"49104e693b35fa8f15dc45127c529c75fa66a6156ffcbc9692c8a50090e84589"} Nov 29 08:21:40 crc kubenswrapper[4660]: E1129 08:21:40.184487 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.186911 4660 generic.go:334] "Generic (PLEG): container finished" podID="6818313a-1cf8-40f2-a828-94e013b2f460" containerID="70ac80b5b5b3fa8fca6b99b233f8cc97b9979774e6284c84cec55ff64d807fc2" exitCode=0 Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.187012 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerDied","Data":"70ac80b5b5b3fa8fca6b99b233f8cc97b9979774e6284c84cec55ff64d807fc2"} Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.191230 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerStarted","Data":"b6f95c582f9c520c5742e4f7d902159f7f1efce9f3f48b868baabafda7bc6c38"} Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.193047 4660 generic.go:334] "Generic (PLEG): container finished" podID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerID="89a101894d0a13c9a6c4eef108eee4470f5619aa6a26cec9258371b3e12946f3" exitCode=0 Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.193069 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerDied","Data":"89a101894d0a13c9a6c4eef108eee4470f5619aa6a26cec9258371b3e12946f3"} Nov 29 08:21:41 crc kubenswrapper[4660]: I1129 08:21:41.283481 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c2bcx" podStartSLOduration=4.19675073 podStartE2EDuration="18.28346399s" podCreationTimestamp="2025-11-29 08:21:23 +0000 UTC" firstStartedPulling="2025-11-29 08:21:25.932177906 +0000 UTC m=+3976.485707815" lastFinishedPulling="2025-11-29 08:21:40.018891176 +0000 UTC m=+3990.572421075" observedRunningTime="2025-11-29 08:21:41.262094081 +0000 UTC m=+3991.815623990" watchObservedRunningTime="2025-11-29 08:21:41.28346399 +0000 UTC m=+3991.836993889" Nov 29 08:21:42 crc kubenswrapper[4660]: I1129 08:21:42.203640 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerStarted","Data":"d56e5092cd39928d4e44c195fbe7d8c27a5454bcfd59ac5becfc0125bd54095d"} Nov 29 08:21:42 crc kubenswrapper[4660]: I1129 08:21:42.206753 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerStarted","Data":"4a379ff694ab0fbd6903c77cf739a61a4fb9c1c9fe4921e6e5872655418d6e36"} Nov 29 08:21:44 crc kubenswrapper[4660]: I1129 08:21:44.137228 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:44 crc kubenswrapper[4660]: I1129 08:21:44.139226 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:21:44 crc kubenswrapper[4660]: I1129 08:21:44.229535 4660 generic.go:334] "Generic (PLEG): container finished" podID="6818313a-1cf8-40f2-a828-94e013b2f460" containerID="4a379ff694ab0fbd6903c77cf739a61a4fb9c1c9fe4921e6e5872655418d6e36" exitCode=0 Nov 29 08:21:44 crc kubenswrapper[4660]: I1129 08:21:44.229597 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerDied","Data":"4a379ff694ab0fbd6903c77cf739a61a4fb9c1c9fe4921e6e5872655418d6e36"} Nov 29 08:21:45 crc kubenswrapper[4660]: I1129 08:21:45.191062 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2bcx" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" 
containerName="registry-server" probeResult="failure" output=< Nov 29 08:21:45 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 08:21:45 crc kubenswrapper[4660]: > Nov 29 08:21:46 crc kubenswrapper[4660]: I1129 08:21:46.252933 4660 generic.go:334] "Generic (PLEG): container finished" podID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerID="d56e5092cd39928d4e44c195fbe7d8c27a5454bcfd59ac5becfc0125bd54095d" exitCode=0 Nov 29 08:21:46 crc kubenswrapper[4660]: I1129 08:21:46.252941 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerDied","Data":"d56e5092cd39928d4e44c195fbe7d8c27a5454bcfd59ac5becfc0125bd54095d"} Nov 29 08:21:47 crc kubenswrapper[4660]: I1129 08:21:47.263341 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerStarted","Data":"fb6747fe110c2facb83f0f42423fd4e0700ac1c696597b4d30b4dc7fda51e6ed"} Nov 29 08:21:47 crc kubenswrapper[4660]: I1129 08:21:47.265485 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerStarted","Data":"cc9d1d044061a050ca4e51159031c3ac224b52e9381e534b84aa0be93f97a0e2"} Nov 29 08:21:47 crc kubenswrapper[4660]: I1129 08:21:47.292880 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cnts7" podStartSLOduration=8.72081574 podStartE2EDuration="14.292864683s" podCreationTimestamp="2025-11-29 08:21:33 +0000 UTC" firstStartedPulling="2025-11-29 08:21:41.193993998 +0000 UTC m=+3991.747523897" lastFinishedPulling="2025-11-29 08:21:46.766042941 +0000 UTC m=+3997.319572840" observedRunningTime="2025-11-29 08:21:47.289909943 +0000 UTC m=+3997.843439842" watchObservedRunningTime="2025-11-29 08:21:47.292864683 +0000 UTC m=+3997.846394582" Nov 29 08:21:47 crc kubenswrapper[4660]: I1129 08:21:47.318026 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lrp6" podStartSLOduration=9.224467964 podStartE2EDuration="14.318001494s" podCreationTimestamp="2025-11-29 08:21:33 +0000 UTC" firstStartedPulling="2025-11-29 08:21:41.188686924 +0000 UTC m=+3991.742216823" lastFinishedPulling="2025-11-29 08:21:46.282220454 +0000 UTC m=+3996.835750353" observedRunningTime="2025-11-29 08:21:47.314377236 +0000 UTC m=+3997.867907135" watchObservedRunningTime="2025-11-29 08:21:47.318001494 +0000 UTC m=+3997.871531393" Nov 29 08:21:48 crc kubenswrapper[4660]: I1129 08:21:48.693032 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:21:48 crc kubenswrapper[4660]: E1129 08:21:48.693462 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:21:53 crc kubenswrapper[4660]: I1129 08:21:53.349312 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" 
event={"ID":"c516f7de-a9fe-4aa7-9d1c-c5b115213d88","Type":"ContainerStarted","Data":"265cf4e433e7cb18244060449eeddd894e034e912a4a5860837779d90131c217"} Nov 29 08:21:53 crc kubenswrapper[4660]: I1129 08:21:53.382054 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" podStartSLOduration=1.885942539 podStartE2EDuration="32.382034016s" podCreationTimestamp="2025-11-29 08:21:21 +0000 UTC" firstStartedPulling="2025-11-29 08:21:21.684856575 +0000 UTC m=+3972.238386464" lastFinishedPulling="2025-11-29 08:21:52.180948042 +0000 UTC m=+4002.734477941" observedRunningTime="2025-11-29 08:21:53.36922016 +0000 UTC m=+4003.922750059" watchObservedRunningTime="2025-11-29 08:21:53.382034016 +0000 UTC m=+4003.935563915" Nov 29 08:21:53 crc kubenswrapper[4660]: I1129 08:21:53.710587 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:53 crc kubenswrapper[4660]: I1129 08:21:53.710682 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:53 crc kubenswrapper[4660]: I1129 08:21:53.764364 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:54 crc kubenswrapper[4660]: I1129 08:21:54.285334 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:54 crc kubenswrapper[4660]: I1129 08:21:54.287007 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:54 crc kubenswrapper[4660]: I1129 08:21:54.338095 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:54 crc kubenswrapper[4660]: I1129 08:21:54.435255 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:54 crc kubenswrapper[4660]: I1129 08:21:54.467470 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:55 crc kubenswrapper[4660]: I1129 08:21:55.193105 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2bcx" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="registry-server" probeResult="failure" output=< Nov 29 08:21:55 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 08:21:55 crc kubenswrapper[4660]: > Nov 29 08:21:55 crc kubenswrapper[4660]: I1129 08:21:55.357953 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:56 crc kubenswrapper[4660]: I1129 08:21:56.375601 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cnts7" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="registry-server" containerID="cri-o://fb6747fe110c2facb83f0f42423fd4e0700ac1c696597b4d30b4dc7fda51e6ed" gracePeriod=2 Nov 29 08:21:56 crc kubenswrapper[4660]: I1129 08:21:56.779215 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:56 crc kubenswrapper[4660]: I1129 08:21:56.779898 4660 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-7lrp6" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="registry-server" containerID="cri-o://cc9d1d044061a050ca4e51159031c3ac224b52e9381e534b84aa0be93f97a0e2" gracePeriod=2 Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.389407 4660 generic.go:334] "Generic (PLEG): container finished" podID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerID="fb6747fe110c2facb83f0f42423fd4e0700ac1c696597b4d30b4dc7fda51e6ed" exitCode=0 Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.389487 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerDied","Data":"fb6747fe110c2facb83f0f42423fd4e0700ac1c696597b4d30b4dc7fda51e6ed"} Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.389512 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnts7" event={"ID":"28ac8b4a-cd23-4026-96fe-72841d0ceed8","Type":"ContainerDied","Data":"49104e693b35fa8f15dc45127c529c75fa66a6156ffcbc9692c8a50090e84589"} Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.389523 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49104e693b35fa8f15dc45127c529c75fa66a6156ffcbc9692c8a50090e84589" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.413889 4660 generic.go:334] "Generic (PLEG): container finished" podID="6818313a-1cf8-40f2-a828-94e013b2f460" containerID="cc9d1d044061a050ca4e51159031c3ac224b52e9381e534b84aa0be93f97a0e2" exitCode=0 Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.413932 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerDied","Data":"cc9d1d044061a050ca4e51159031c3ac224b52e9381e534b84aa0be93f97a0e2"} Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.472017 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.598244 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content\") pod \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.598516 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities\") pod \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.598553 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbrrj\" (UniqueName: \"kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj\") pod \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\" (UID: \"28ac8b4a-cd23-4026-96fe-72841d0ceed8\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.600963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities" (OuterVolumeSpecName: "utilities") pod "28ac8b4a-cd23-4026-96fe-72841d0ceed8" (UID: "28ac8b4a-cd23-4026-96fe-72841d0ceed8"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.611213 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj" (OuterVolumeSpecName: "kube-api-access-mbrrj") pod "28ac8b4a-cd23-4026-96fe-72841d0ceed8" (UID: "28ac8b4a-cd23-4026-96fe-72841d0ceed8"). InnerVolumeSpecName "kube-api-access-mbrrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.669705 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28ac8b4a-cd23-4026-96fe-72841d0ceed8" (UID: "28ac8b4a-cd23-4026-96fe-72841d0ceed8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.703090 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.703121 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbrrj\" (UniqueName: \"kubernetes.io/projected/28ac8b4a-cd23-4026-96fe-72841d0ceed8-kube-api-access-mbrrj\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.703133 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ac8b4a-cd23-4026-96fe-72841d0ceed8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.714111 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.806975 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content\") pod \"6818313a-1cf8-40f2-a828-94e013b2f460\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.807135 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities\") pod \"6818313a-1cf8-40f2-a828-94e013b2f460\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.807385 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9ml\" (UniqueName: \"kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml\") pod \"6818313a-1cf8-40f2-a828-94e013b2f460\" (UID: \"6818313a-1cf8-40f2-a828-94e013b2f460\") " Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.808971 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities" (OuterVolumeSpecName: "utilities") pod "6818313a-1cf8-40f2-a828-94e013b2f460" (UID: "6818313a-1cf8-40f2-a828-94e013b2f460"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.884786 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6818313a-1cf8-40f2-a828-94e013b2f460" (UID: "6818313a-1cf8-40f2-a828-94e013b2f460"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.890070 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml" (OuterVolumeSpecName: "kube-api-access-cs9ml") pod "6818313a-1cf8-40f2-a828-94e013b2f460" (UID: "6818313a-1cf8-40f2-a828-94e013b2f460"). InnerVolumeSpecName "kube-api-access-cs9ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.913391 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.913435 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6818313a-1cf8-40f2-a828-94e013b2f460-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:57 crc kubenswrapper[4660]: I1129 08:21:57.913445 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9ml\" (UniqueName: \"kubernetes.io/projected/6818313a-1cf8-40f2-a828-94e013b2f460-kube-api-access-cs9ml\") on node \"crc\" DevicePath \"\"" Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.426985 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnts7" Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.428182 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lrp6" Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.428216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lrp6" event={"ID":"6818313a-1cf8-40f2-a828-94e013b2f460","Type":"ContainerDied","Data":"7f5a70098efec1bad32359ed1c179592e56b9fb5123b040104ba44acaaf1b11f"} Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.428309 4660 scope.go:117] "RemoveContainer" containerID="cc9d1d044061a050ca4e51159031c3ac224b52e9381e534b84aa0be93f97a0e2" Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.462151 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.476123 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cnts7"] Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.488730 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:58 crc kubenswrapper[4660]: I1129 08:21:58.502128 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lrp6"] Nov 29 08:21:59 crc kubenswrapper[4660]: I1129 08:21:59.704797 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" path="/var/lib/kubelet/pods/28ac8b4a-cd23-4026-96fe-72841d0ceed8/volumes" Nov 29 08:21:59 crc kubenswrapper[4660]: I1129 08:21:59.706398 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" path="/var/lib/kubelet/pods/6818313a-1cf8-40f2-a828-94e013b2f460/volumes" Nov 29 08:22:00 crc kubenswrapper[4660]: I1129 08:22:00.601647 4660 scope.go:117] "RemoveContainer" containerID="4a379ff694ab0fbd6903c77cf739a61a4fb9c1c9fe4921e6e5872655418d6e36" Nov 29 08:22:00 crc kubenswrapper[4660]: I1129 08:22:00.688718 4660 scope.go:117] "RemoveContainer" containerID="70ac80b5b5b3fa8fca6b99b233f8cc97b9979774e6284c84cec55ff64d807fc2" Nov 29 08:22:02 crc kubenswrapper[4660]: I1129 08:22:02.694217 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:22:02 crc kubenswrapper[4660]: E1129 08:22:02.695546 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:22:04 crc kubenswrapper[4660]: I1129 08:22:04.577270 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:22:04 crc kubenswrapper[4660]: I1129 08:22:04.633125 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:22:05 crc kubenswrapper[4660]: I1129 08:22:05.578355 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:22:06 crc kubenswrapper[4660]: I1129 08:22:06.499925 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c2bcx" 
podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="registry-server" containerID="cri-o://b6f95c582f9c520c5742e4f7d902159f7f1efce9f3f48b868baabafda7bc6c38" gracePeriod=2 Nov 29 08:22:07 crc kubenswrapper[4660]: I1129 08:22:07.513541 4660 generic.go:334] "Generic (PLEG): container finished" podID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerID="b6f95c582f9c520c5742e4f7d902159f7f1efce9f3f48b868baabafda7bc6c38" exitCode=0 Nov 29 08:22:07 crc kubenswrapper[4660]: I1129 08:22:07.513940 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerDied","Data":"b6f95c582f9c520c5742e4f7d902159f7f1efce9f3f48b868baabafda7bc6c38"} Nov 29 08:22:07 crc kubenswrapper[4660]: I1129 08:22:07.514174 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2bcx" event={"ID":"8c26dfec-e0b4-4e54-9c12-9a117ac0596c","Type":"ContainerDied","Data":"36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4"} Nov 29 08:22:07 crc kubenswrapper[4660]: I1129 08:22:07.514191 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36548cd6ccc0cf9ea73e2ca704b0654b6e1c7c97853d85f0d99c85d8450422e4" Nov 29 08:22:07 crc kubenswrapper[4660]: I1129 08:22:07.923585 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.007305 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content\") pod \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.007430 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities\") pod \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.007454 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmngg\" (UniqueName: \"kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg\") pod \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\" (UID: \"8c26dfec-e0b4-4e54-9c12-9a117ac0596c\") " Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.008106 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities" (OuterVolumeSpecName: "utilities") pod "8c26dfec-e0b4-4e54-9c12-9a117ac0596c" (UID: "8c26dfec-e0b4-4e54-9c12-9a117ac0596c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.009061 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.041341 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg" (OuterVolumeSpecName: "kube-api-access-xmngg") pod "8c26dfec-e0b4-4e54-9c12-9a117ac0596c" (UID: "8c26dfec-e0b4-4e54-9c12-9a117ac0596c"). InnerVolumeSpecName "kube-api-access-xmngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.110423 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmngg\" (UniqueName: \"kubernetes.io/projected/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-kube-api-access-xmngg\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.167436 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c26dfec-e0b4-4e54-9c12-9a117ac0596c" (UID: "8c26dfec-e0b4-4e54-9c12-9a117ac0596c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.211925 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c26dfec-e0b4-4e54-9c12-9a117ac0596c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.523070 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c2bcx" Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.564787 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:22:08 crc kubenswrapper[4660]: I1129 08:22:08.576003 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c2bcx"] Nov 29 08:22:09 crc kubenswrapper[4660]: I1129 08:22:09.706016 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" path="/var/lib/kubelet/pods/8c26dfec-e0b4-4e54-9c12-9a117ac0596c/volumes" Nov 29 08:22:17 crc kubenswrapper[4660]: I1129 08:22:17.694800 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99" Nov 29 08:22:18 crc kubenswrapper[4660]: I1129 08:22:18.680150 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1"} Nov 29 08:22:33 crc kubenswrapper[4660]: I1129 08:22:33.888555 4660 generic.go:334] "Generic (PLEG): container finished" podID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" containerID="265cf4e433e7cb18244060449eeddd894e034e912a4a5860837779d90131c217" exitCode=0 Nov 29 08:22:33 crc kubenswrapper[4660]: I1129 08:22:33.888689 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" event={"ID":"c516f7de-a9fe-4aa7-9d1c-c5b115213d88","Type":"ContainerDied","Data":"265cf4e433e7cb18244060449eeddd894e034e912a4a5860837779d90131c217"} Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.002479 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.052975 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-d6nmr"] Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.061674 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-d6nmr"] Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.139636 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2872r\" (UniqueName: \"kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r\") pod \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.139755 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host\") pod \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\" (UID: \"c516f7de-a9fe-4aa7-9d1c-c5b115213d88\") " Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.140088 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host" (OuterVolumeSpecName: "host") pod "c516f7de-a9fe-4aa7-9d1c-c5b115213d88" (UID: "c516f7de-a9fe-4aa7-9d1c-c5b115213d88"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.145938 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r" (OuterVolumeSpecName: "kube-api-access-2872r") pod "c516f7de-a9fe-4aa7-9d1c-c5b115213d88" (UID: "c516f7de-a9fe-4aa7-9d1c-c5b115213d88"). InnerVolumeSpecName "kube-api-access-2872r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.242184 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.242483 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2872r\" (UniqueName: \"kubernetes.io/projected/c516f7de-a9fe-4aa7-9d1c-c5b115213d88-kube-api-access-2872r\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.705754 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" path="/var/lib/kubelet/pods/c516f7de-a9fe-4aa7-9d1c-c5b115213d88/volumes" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.912676 4660 scope.go:117] "RemoveContainer" containerID="265cf4e433e7cb18244060449eeddd894e034e912a4a5860837779d90131c217" Nov 29 08:22:35 crc kubenswrapper[4660]: I1129 08:22:35.912764 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-d6nmr" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.253868 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-z4h7j"] Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254690 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254706 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254743 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" containerName="container-00" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254751 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" containerName="container-00" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254763 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254772 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254788 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254797 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254811 4660 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254821 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254835 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254843 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254861 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254869 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254891 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254901 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="extract-content" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254922 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254930 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: E1129 08:22:36.254944 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.254952 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="extract-utilities" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.255210 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6818313a-1cf8-40f2-a828-94e013b2f460" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.255245 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ac8b4a-cd23-4026-96fe-72841d0ceed8" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.255261 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c516f7de-a9fe-4aa7-9d1c-c5b115213d88" containerName="container-00" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.255293 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c26dfec-e0b4-4e54-9c12-9a117ac0596c" containerName="registry-server" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.256062 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.258627 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd6qz"/"default-dockercfg-xngrn" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.365909 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldzxt\" (UniqueName: \"kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.366112 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.468189 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzxt\" (UniqueName: \"kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.468315 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.468452 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.499443 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzxt\" (UniqueName: \"kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt\") pod \"crc-debug-z4h7j\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.580530 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:36 crc kubenswrapper[4660]: I1129 08:22:36.922989 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" event={"ID":"4f59a04b-94ab-44ca-a5e0-74350c0d17bd","Type":"ContainerStarted","Data":"85b857ad3700a6af0ed94e35f65161e0568db0a0eff6b9597b7fcfae746c9a65"} Nov 29 08:22:37 crc kubenswrapper[4660]: I1129 08:22:37.934212 4660 generic.go:334] "Generic (PLEG): container finished" podID="4f59a04b-94ab-44ca-a5e0-74350c0d17bd" containerID="dca0c0721510b88064bf23e6954442ee1d729956328a8609a7b61126ed0ee026" exitCode=0 Nov 29 08:22:37 crc kubenswrapper[4660]: I1129 08:22:37.934307 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" event={"ID":"4f59a04b-94ab-44ca-a5e0-74350c0d17bd","Type":"ContainerDied","Data":"dca0c0721510b88064bf23e6954442ee1d729956328a8609a7b61126ed0ee026"} Nov 29 08:22:38 crc kubenswrapper[4660]: I1129 08:22:38.440340 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-z4h7j"] Nov 29 08:22:38 crc kubenswrapper[4660]: I1129 08:22:38.448347 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-z4h7j"] Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.063212 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.121941 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldzxt\" (UniqueName: \"kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt\") pod \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.122012 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host\") pod \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\" (UID: \"4f59a04b-94ab-44ca-a5e0-74350c0d17bd\") " Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.122135 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host" (OuterVolumeSpecName: "host") pod "4f59a04b-94ab-44ca-a5e0-74350c0d17bd" (UID: "4f59a04b-94ab-44ca-a5e0-74350c0d17bd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.122572 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.128800 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt" (OuterVolumeSpecName: "kube-api-access-ldzxt") pod "4f59a04b-94ab-44ca-a5e0-74350c0d17bd" (UID: "4f59a04b-94ab-44ca-a5e0-74350c0d17bd"). InnerVolumeSpecName "kube-api-access-ldzxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.224869 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldzxt\" (UniqueName: \"kubernetes.io/projected/4f59a04b-94ab-44ca-a5e0-74350c0d17bd-kube-api-access-ldzxt\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.608376 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-nt9vs"] Nov 29 08:22:39 crc kubenswrapper[4660]: E1129 08:22:39.608815 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f59a04b-94ab-44ca-a5e0-74350c0d17bd" containerName="container-00" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.608837 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f59a04b-94ab-44ca-a5e0-74350c0d17bd" containerName="container-00" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.609076 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f59a04b-94ab-44ca-a5e0-74350c0d17bd" containerName="container-00" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.609683 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.703487 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f59a04b-94ab-44ca-a5e0-74350c0d17bd" path="/var/lib/kubelet/pods/4f59a04b-94ab-44ca-a5e0-74350c0d17bd/volumes" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.733497 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.733665 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46q2j\" (UniqueName: \"kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.835743 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46q2j\" (UniqueName: \"kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.835947 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.836166 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.857536 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-46q2j\" (UniqueName: \"kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j\") pod \"crc-debug-nt9vs\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.926416 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.953357 4660 scope.go:117] "RemoveContainer" containerID="dca0c0721510b88064bf23e6954442ee1d729956328a8609a7b61126ed0ee026" Nov 29 08:22:39 crc kubenswrapper[4660]: I1129 08:22:39.953389 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-z4h7j" Nov 29 08:22:39 crc kubenswrapper[4660]: W1129 08:22:39.963557 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e2a59b3_36ce_457a_8ee5_d39a762685b5.slice/crio-a1530fd77b4cd4aaa87d2ad726b671814d2f7c45fd11f723afac384a24991cea WatchSource:0}: Error finding container a1530fd77b4cd4aaa87d2ad726b671814d2f7c45fd11f723afac384a24991cea: Status 404 returned error can't find the container with id a1530fd77b4cd4aaa87d2ad726b671814d2f7c45fd11f723afac384a24991cea Nov 29 08:22:40 crc kubenswrapper[4660]: I1129 08:22:40.975254 4660 generic.go:334] "Generic (PLEG): container finished" podID="7e2a59b3-36ce-457a-8ee5-d39a762685b5" containerID="6cfe23c40bab1fd04f8ecb8e0910411280a9f7cc299af99e1cfa9beffaf5f3bb" exitCode=0 Nov 29 08:22:40 crc kubenswrapper[4660]: I1129 08:22:40.975347 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" event={"ID":"7e2a59b3-36ce-457a-8ee5-d39a762685b5","Type":"ContainerDied","Data":"6cfe23c40bab1fd04f8ecb8e0910411280a9f7cc299af99e1cfa9beffaf5f3bb"} Nov 29 08:22:40 crc kubenswrapper[4660]: I1129 08:22:40.975717 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" event={"ID":"7e2a59b3-36ce-457a-8ee5-d39a762685b5","Type":"ContainerStarted","Data":"a1530fd77b4cd4aaa87d2ad726b671814d2f7c45fd11f723afac384a24991cea"} Nov 29 08:22:41 crc kubenswrapper[4660]: I1129 08:22:41.024703 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-nt9vs"] Nov 29 08:22:41 crc kubenswrapper[4660]: I1129 08:22:41.032115 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd6qz/crc-debug-nt9vs"] Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.755393 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.791749 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46q2j\" (UniqueName: \"kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j\") pod \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.792163 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host\") pod \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\" (UID: \"7e2a59b3-36ce-457a-8ee5-d39a762685b5\") " Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.793717 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host" (OuterVolumeSpecName: "host") pod "7e2a59b3-36ce-457a-8ee5-d39a762685b5" (UID: "7e2a59b3-36ce-457a-8ee5-d39a762685b5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.887421 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j" (OuterVolumeSpecName: "kube-api-access-46q2j") pod "7e2a59b3-36ce-457a-8ee5-d39a762685b5" (UID: "7e2a59b3-36ce-457a-8ee5-d39a762685b5"). InnerVolumeSpecName "kube-api-access-46q2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.894980 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46q2j\" (UniqueName: \"kubernetes.io/projected/7e2a59b3-36ce-457a-8ee5-d39a762685b5-kube-api-access-46q2j\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.895239 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e2a59b3-36ce-457a-8ee5-d39a762685b5-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.996084 4660 scope.go:117] "RemoveContainer" containerID="6cfe23c40bab1fd04f8ecb8e0910411280a9f7cc299af99e1cfa9beffaf5f3bb" Nov 29 08:22:42 crc kubenswrapper[4660]: I1129 08:22:42.996148 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd6qz/crc-debug-nt9vs" Nov 29 08:22:43 crc kubenswrapper[4660]: I1129 08:22:43.706305 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2a59b3-36ce-457a-8ee5-d39a762685b5" path="/var/lib/kubelet/pods/7e2a59b3-36ce-457a-8ee5-d39a762685b5/volumes" Nov 29 08:23:00 crc kubenswrapper[4660]: I1129 08:23:00.167821 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-795c6b768d-rnj8x_f92699d7-37a0-4093-81b8-ddb680ca5263/barbican-api/0.log" Nov 29 08:23:00 crc kubenswrapper[4660]: I1129 08:23:00.407298 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65db494558-68jff_25ca5104-7d38-40bc-aa55-19dbd28b40f3/barbican-keystone-listener/0.log" Nov 29 08:23:00 crc kubenswrapper[4660]: I1129 08:23:00.707173 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65db494558-68jff_25ca5104-7d38-40bc-aa55-19dbd28b40f3/barbican-keystone-listener-log/0.log" Nov 29 08:23:00 crc kubenswrapper[4660]: I1129 08:23:00.746245 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f844c8dbc-j8g6j_b1f5216f-e274-4987-b2cc-98effb9661eb/barbican-worker/0.log" Nov 29 08:23:00 crc kubenswrapper[4660]: I1129 08:23:00.851874 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-795c6b768d-rnj8x_f92699d7-37a0-4093-81b8-ddb680ca5263/barbican-api-log/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.119319 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f844c8dbc-j8g6j_b1f5216f-e274-4987-b2cc-98effb9661eb/barbican-worker-log/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.136703 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh_92f06c4a-45f4-4542-b502-210d08515f70/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.299580 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/ceilometer-central-agent/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.377046 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/proxy-httpd/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.438152 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/ceilometer-notification-agent/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.459728 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/sg-core/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.604395 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b2bdc67-626d-4aa5-94ff-d413be98dc7c/cinder-api/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.720870 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b2bdc67-626d-4aa5-94ff-d413be98dc7c/cinder-api-log/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.845163 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b255ded3-2849-4f46-bb45-5c2485862b55/cinder-scheduler/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 
08:23:01.915818 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b255ded3-2849-4f46-bb45-5c2485862b55/probe/0.log" Nov 29 08:23:01 crc kubenswrapper[4660]: I1129 08:23:01.991819 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz_8d0ffb5c-54ae-48a8-9448-7b78f45814a7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:02 crc kubenswrapper[4660]: I1129 08:23:02.596767 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9_b6e39886-2df6-4257-babe-441252581041/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:02 crc kubenswrapper[4660]: I1129 08:23:02.651266 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/init/0.log" Nov 29 08:23:02 crc kubenswrapper[4660]: I1129 08:23:02.853286 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/init/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.012101 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/dnsmasq-dns/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.045291 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5895t_c2698bcc-7e72-4b53-8bbf-9d71b4720148/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.248119 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e45b487-ff42-480a-a6a2-803949758e7a/glance-log/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.263726 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e45b487-ff42-480a-a6a2-803949758e7a/glance-httpd/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.527265 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2ec421d-c491-4c1f-9f9d-ec260df3cc87/glance-httpd/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.559796 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2ec421d-c491-4c1f-9f9d-ec260df3cc87/glance-log/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.790984 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d8477fd94-v56g5_953f9580-5907-45bf-ae44-e48149acc44c/horizon/0.log" Nov 29 08:23:03 crc kubenswrapper[4660]: I1129 08:23:03.946745 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv_93142c96-03e4-4441-a738-407379eeb07f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.058070 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d8477fd94-v56g5_953f9580-5907-45bf-ae44-e48149acc44c/horizon-log/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.107855 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-thxwk_955fb591-0de6-4f55-a61f-fc232791fe54/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.367030 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406721-p2zgx_a2ce58ac-319c-47df-b44b-8958659262f8/keystone-cron/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.434126 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54fd458c48-wjcjs_4696d01f-aadd-46fb-b966-f67035bb6ba4/keystone-api/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.561724 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d65ebb5a-68a4-4848-8093-92d49f373550/kube-state-metrics/0.log" Nov 29 08:23:04 crc kubenswrapper[4660]: I1129 08:23:04.758890 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm_d0e385e9-5832-4dae-832e-5e155dd48813/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.014057 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d5bfc6bd5-zc4q8_b601f952-5ec7-401c-b639-01245efb2379/neutron-httpd/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.049270 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d5bfc6bd5-zc4q8_b601f952-5ec7-401c-b639-01245efb2379/neutron-api/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.173088 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj_f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.526371 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2bdf1a62-5e19-4a99-9950-3208cdb8cd0b/nova-api-log/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.766745 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fd768c12-7e2d-4283-a390-0f17185cb3ca/nova-cell0-conductor-conductor/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.873490 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2bdf1a62-5e19-4a99-9950-3208cdb8cd0b/nova-api-api/0.log" Nov 29 08:23:05 crc kubenswrapper[4660]: I1129 08:23:05.979261 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6933c9c1-60f6-4099-982d-22b279546662/nova-cell1-conductor-conductor/0.log" Nov 29 08:23:06 crc kubenswrapper[4660]: I1129 08:23:06.197827 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_544cff03-d589-4ba8-ac61-e5976fe393d9/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 08:23:06 crc kubenswrapper[4660]: I1129 08:23:06.448675 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-flbrk_f4ebec6a-7674-4948-94b8-51d4f1e6de90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:06 crc kubenswrapper[4660]: I1129 08:23:06.591190 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8fbec32-e360-48a4-802f-acafba9315fc/nova-metadata-log/0.log" Nov 29 08:23:06 crc kubenswrapper[4660]: I1129 08:23:06.839594 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_f0b8bc00-d486-430f-ad6d-483e3372519b/nova-scheduler-scheduler/0.log" Nov 29 08:23:07 crc kubenswrapper[4660]: I1129 08:23:07.479145 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/mysql-bootstrap/0.log" Nov 29 08:23:07 crc kubenswrapper[4660]: I1129 08:23:07.664486 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/mysql-bootstrap/0.log" Nov 29 08:23:07 crc kubenswrapper[4660]: I1129 08:23:07.799112 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/galera/0.log" Nov 29 08:23:07 crc kubenswrapper[4660]: I1129 08:23:07.904400 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/mysql-bootstrap/0.log" Nov 29 08:23:07 crc kubenswrapper[4660]: I1129 08:23:07.969808 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8fbec32-e360-48a4-802f-acafba9315fc/nova-metadata-metadata/0.log" Nov 29 08:23:08 crc kubenswrapper[4660]: I1129 08:23:08.243649 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/mysql-bootstrap/0.log" Nov 29 08:23:08 crc kubenswrapper[4660]: I1129 08:23:08.302181 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d541b23c-6413-4bee-834c-96e5d46a9155/openstackclient/0.log" Nov 29 08:23:08 crc kubenswrapper[4660]: I1129 08:23:08.325035 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/galera/0.log" Nov 29 08:23:08 crc kubenswrapper[4660]: I1129 08:23:08.600812 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wkgc6_3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81/openstack-network-exporter/0.log" Nov 29 08:23:08 crc kubenswrapper[4660]: I1129 08:23:08.629687 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server-init/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.125414 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovs-vswitchd/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.160121 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server-init/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.224965 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.404451 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xdz26_a75569c9-ce83-4515-894c-b067e01f3d9b/ovn-controller/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.609550 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-llf58_62033900-fce1-44ce-9b4b-44d61b45123c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.773489 4660 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_a17c15c7-a4af-4447-b315-8558385d4449/openstack-network-exporter/0.log" Nov 29 08:23:09 crc kubenswrapper[4660]: I1129 08:23:09.853583 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a17c15c7-a4af-4447-b315-8558385d4449/ovn-northd/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.004362 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_825a377f-a7b3-4a9c-a39c-8e3086eb554f/openstack-network-exporter/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.082972 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_825a377f-a7b3-4a9c-a39c-8e3086eb554f/ovsdbserver-nb/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.278970 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d07487c-33de-4aa4-9878-bcdd17e2a1d9/ovsdbserver-sb/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.365899 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d07487c-33de-4aa4-9878-bcdd17e2a1d9/openstack-network-exporter/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.553408 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4c5f6f9b-h8nfr_c8432d67-8b8a-43f4-96b5-e852610f702c/placement-api/0.log" Nov 29 08:23:10 crc kubenswrapper[4660]: I1129 08:23:10.599910 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4c5f6f9b-h8nfr_c8432d67-8b8a-43f4-96b5-e852610f702c/placement-log/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.263997 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/setup-container/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.612966 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/setup-container/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.729765 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/rabbitmq/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.744844 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/setup-container/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.951772 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/setup-container/0.log" Nov 29 08:23:11 crc kubenswrapper[4660]: I1129 08:23:11.968034 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/rabbitmq/0.log" Nov 29 08:23:12 crc kubenswrapper[4660]: I1129 08:23:12.189451 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x_36029f28-c187-4b77-afda-fd74d56bd1c5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:12 crc kubenswrapper[4660]: I1129 08:23:12.297693 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-497rm_7917f022-eed4-4622-a10e-82a72f068b29/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:12 crc kubenswrapper[4660]: I1129 08:23:12.431469 
4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp_f8c2c2ad-2cee-414f-a0df-76351f87c6e0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:12 crc kubenswrapper[4660]: I1129 08:23:12.665817 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tff2b_6a6ef616-fee3-4bcb-acef-c63943b96e22/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:23:12 crc kubenswrapper[4660]: I1129 08:23:12.783707 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hcf98_4118c243-9402-4481-abdd-0a5d0581415b/ssh-known-hosts-edpm-deployment/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.063900 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75ddc44955-xj8mn_27a79873-e3bd-4172-b5c3-17a981a9a091/proxy-server/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.078054 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75ddc44955-xj8mn_27a79873-e3bd-4172-b5c3-17a981a9a091/proxy-httpd/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.313478 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5xg97_d487e762-0eca-4f42-aae2-1b8674868db1/swift-ring-rebalance/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.354743 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-auditor/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.446041 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-reaper/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.688021 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-replicator/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.711742 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-server/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.727813 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-replicator/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.731684 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-auditor/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.974363 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-server/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.984270 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-updater/0.log" Nov 29 08:23:13 crc kubenswrapper[4660]: I1129 08:23:13.996344 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-auditor/0.log" Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.049378 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-expirer/0.log" 
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.290598 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-replicator/0.log"
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.320787 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-updater/0.log"
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.341746 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-server/0.log"
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.390979 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/rsync/0.log"
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.642471 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/swift-recon-cron/0.log"
Nov 29 08:23:14 crc kubenswrapper[4660]: I1129 08:23:14.975287 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc_fddda6dc-cca7-41a8-8be3-1e6647af2356/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 29 08:23:15 crc kubenswrapper[4660]: I1129 08:23:15.215746 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2731c762-e02a-4472-b014-19739f6c47da/tempest-tests-tempest-tests-runner/0.log"
Nov 29 08:23:15 crc kubenswrapper[4660]: I1129 08:23:15.277428 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_efec8b85-1d8a-4f33-b482-a08afe9737bf/test-operator-logs-container/0.log"
Nov 29 08:23:15 crc kubenswrapper[4660]: I1129 08:23:15.410270 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9rwss_fb67bcf4-d0ed-4dbb-b571-322a52c4c43f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 29 08:23:22 crc kubenswrapper[4660]: I1129 08:23:22.770435 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_46c3b1d2-02f5-4632-bf44-648754c2e83c/memcached/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.097793 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59d587b55-wqktr_f0b999b3-e302-40ca-a1aa-5173b5655498/kube-rbac-proxy/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.207090 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59d587b55-wqktr_f0b999b3-e302-40ca-a1aa-5173b5655498/manager/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.352187 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cmgp5_0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd/kube-rbac-proxy/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.434816 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cmgp5_0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd/manager/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.559910 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-jdqzs_81afdf1a-a8f8-4f69-8824-192bcf14424c/kube-rbac-proxy/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.630326 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-jdqzs_81afdf1a-a8f8-4f69-8824-192bcf14424c/manager/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.691713 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.960358 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log"
Nov 29 08:23:46 crc kubenswrapper[4660]: I1129 08:23:46.964847 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.002763 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.220124 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.371493 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.447980 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/extract/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.634574 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4gjhw_7ce83127-45e9-4a96-8815-538f3bde77ed/manager/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.659910 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4gjhw_7ce83127-45e9-4a96-8815-538f3bde77ed/kube-rbac-proxy/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.831988 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v9rs2_29c0443d-0d08-4708-b268-07ae28680e01/kube-rbac-proxy/0.log"
Nov 29 08:23:47 crc kubenswrapper[4660]: I1129 08:23:47.959527 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v9rs2_29c0443d-0d08-4708-b268-07ae28680e01/manager/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.093536 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-cwb2d_d2a4ddee-42a4-451d-9bd7-3028e4680d47/kube-rbac-proxy/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.209301 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-cwb2d_d2a4ddee-42a4-451d-9bd7-3028e4680d47/manager/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.346151 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-vrqgm_a6e93136-e20e-4070-ae0d-db82c3d2b464/kube-rbac-proxy/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.579628 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-2mb85_edf52fa0-02fe-49d3-8368-fe26598027ec/kube-rbac-proxy/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.648089 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-vrqgm_a6e93136-e20e-4070-ae0d-db82c3d2b464/manager/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.651510 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-2mb85_edf52fa0-02fe-49d3-8368-fe26598027ec/manager/0.log"
Nov 29 08:23:48 crc kubenswrapper[4660]: I1129 08:23:48.985526 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-b2rlk_96a424c4-d4f3-49c2-94a3-20d236cb207d/manager/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.350431 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-b2rlk_96a424c4-d4f3-49c2-94a3-20d236cb207d/kube-rbac-proxy/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.556594 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-v9g26_08635026-10f5-4929-b9f5-b5d6fcac6d28/manager/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.599241 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-v9g26_08635026-10f5-4929-b9f5-b5d6fcac6d28/kube-rbac-proxy/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.670456 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7446l_c0579e8a-66e1-4b7c-aaf8-435d07e6e98d/kube-rbac-proxy/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.788665 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7446l_c0579e8a-66e1-4b7c-aaf8-435d07e6e98d/manager/0.log"
Nov 29 08:23:49 crc kubenswrapper[4660]: I1129 08:23:49.935844 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8cnzr_b191bd3e-cd1b-43c8-99c4-54701a29dfda/kube-rbac-proxy/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.080959 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8cnzr_b191bd3e-cd1b-43c8-99c4-54701a29dfda/manager/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.299878 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t82nj_1688cfe7-0002-4b5c-916b-ca18c9519de3/manager/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.376300 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t82nj_1688cfe7-0002-4b5c-916b-ca18c9519de3/kube-rbac-proxy/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.558536 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6c2m6_2badc2b5-6bdb-44b6-8d54-f8763fe78fd6/kube-rbac-proxy/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.609637 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6c2m6_2badc2b5-6bdb-44b6-8d54-f8763fe78fd6/manager/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.835298 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd49blbh_02680922-54f1-494d-a32d-e01b82b9cfd2/kube-rbac-proxy/0.log"
Nov 29 08:23:50 crc kubenswrapper[4660]: I1129 08:23:50.912350 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd49blbh_02680922-54f1-494d-a32d-e01b82b9cfd2/manager/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.230632 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6hcx4_8afce996-f777-4ef3-a57d-d09faabc1b46/registry-server/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.361597 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-f9fd8cd-p4sd2_b9aba585-e5b4-47a1-904b-f3f1f86d6251/operator/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.506303 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-z5n6s_eb02d6d1-14c5-409f-8c54-60e35f909a84/kube-rbac-proxy/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.507046 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-z5n6s_eb02d6d1-14c5-409f-8c54-60e35f909a84/manager/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.863983 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-95ndx_d56ee9fc-8151-4442-b491-1e5c8faf48c4/manager/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.948865 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-95ndx_d56ee9fc-8151-4442-b491-1e5c8faf48c4/kube-rbac-proxy/0.log"
Nov 29 08:23:51 crc kubenswrapper[4660]: I1129 08:23:51.994594 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fb5f7cfbf-7dwbm_e676373b-cd82-4455-ae35-62c31e458d5d/manager/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.092300 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8vp89_9ee27942-cb74-4ee0-b4b9-9f995b6604a4/operator/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.144410 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-724c7_e512b840-83f6-47dc-b5ed-669807cc2878/manager/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.178171 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-724c7_e512b840-83f6-47dc-b5ed-669807cc2878/kube-rbac-proxy/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.328110 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-4zn9g_01080af3-022a-430c-a9cc-b9b98f5214de/kube-rbac-proxy/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.434408 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-4zn9g_01080af3-022a-430c-a9cc-b9b98f5214de/manager/0.log"
Nov 29 08:23:52 crc kubenswrapper[4660]: I1129 08:23:52.470563 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mw22w_e0c70c45-673e-47e6-80cd-99bbfbe6e695/kube-rbac-proxy/0.log"
Nov 29 08:23:53 crc kubenswrapper[4660]: I1129 08:23:53.037247 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mw22w_e0c70c45-673e-47e6-80cd-99bbfbe6e695/manager/0.log"
Nov 29 08:23:53 crc kubenswrapper[4660]: I1129 08:23:53.085622 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hsm2_4747fced-480f-4185-b4e3-2dedd7f05614/kube-rbac-proxy/0.log"
Nov 29 08:23:53 crc kubenswrapper[4660]: I1129 08:23:53.101666 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hsm2_4747fced-480f-4185-b4e3-2dedd7f05614/manager/0.log"
Nov 29 08:24:15 crc kubenswrapper[4660]: I1129 08:24:15.626101 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-znn4f_a6fc6ac1-6b93-4e45-a741-9df933ea2d11/control-plane-machine-set-operator/0.log"
Nov 29 08:24:15 crc kubenswrapper[4660]: I1129 08:24:15.824021 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7j5ts_133a42bf-5cdf-4614-8a42-4ce3e350481e/kube-rbac-proxy/0.log"
Nov 29 08:24:15 crc kubenswrapper[4660]: I1129 08:24:15.881876 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7j5ts_133a42bf-5cdf-4614-8a42-4ce3e350481e/machine-api-operator/0.log"
Nov 29 08:24:30 crc kubenswrapper[4660]: I1129 08:24:30.611779 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-vsxjs_c7d3889b-9b53-40ae-9a2e-39e7080e11c9/cert-manager-controller/0.log"
Nov 29 08:24:30 crc kubenswrapper[4660]: I1129 08:24:30.705517 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s4hsk_bb0d8d41-b2d2-432b-865f-0069bd153d0a/cert-manager-cainjector/0.log"
Nov 29 08:24:30 crc kubenswrapper[4660]: I1129 08:24:30.782840 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-b4x7w_fd8f5350-5025-49b7-85c6-5f7c1d5724a7/cert-manager-webhook/0.log"
Nov 29 08:24:35 crc kubenswrapper[4660]: I1129 08:24:35.500245 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:24:35 crc kubenswrapper[4660]: I1129 08:24:35.501780 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:24:45 crc kubenswrapper[4660]: I1129 08:24:45.603186 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-54kd5_1f69a645-8449-4c71-abdb-2d9a1413eae0/nmstate-console-plugin/0.log"
Nov 29 08:24:45 crc kubenswrapper[4660]: I1129 08:24:45.837129 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hzjdq_00b98def-8412-4510-a607-30ea7c13600d/nmstate-handler/0.log"
Nov 29 08:24:45 crc kubenswrapper[4660]: I1129 08:24:45.960325 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gxczb_abb40e0e-8d39-4ede-a762-2968c5ae46a1/nmstate-metrics/0.log"
Nov 29 08:24:46 crc kubenswrapper[4660]: I1129 08:24:46.028399 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gxczb_abb40e0e-8d39-4ede-a762-2968c5ae46a1/kube-rbac-proxy/0.log"
Nov 29 08:24:46 crc kubenswrapper[4660]: I1129 08:24:46.108185 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-bcpsd_8355ccfb-5f01-461d-9aca-89e61881e1d2/nmstate-operator/0.log"
Nov 29 08:24:46 crc kubenswrapper[4660]: I1129 08:24:46.284859 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-ds7np_c3aaf1b2-a146-43cd-91ab-8ee65cff6e44/nmstate-webhook/0.log"
Nov 29 08:25:03 crc kubenswrapper[4660]: I1129 08:25:03.790757 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdr7b_fb85aed1-c862-47ce-84e9-e5d44218faff/kube-rbac-proxy/0.log"
Nov 29 08:25:03 crc kubenswrapper[4660]: I1129 08:25:03.946803 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdr7b_fb85aed1-c862-47ce-84e9-e5d44218faff/controller/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.025530 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.240402 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.256263 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.302825 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.308859 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.517779 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.561523 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.568250 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.629349 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.792388 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.832550 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.837928 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log"
Nov 29 08:25:04 crc kubenswrapper[4660]: I1129 08:25:04.857768 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/controller/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.016093 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/kube-rbac-proxy/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.084893 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/frr-metrics/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.098275 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/kube-rbac-proxy-frr/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.360320 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/reloader/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.362788 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-pf7m4_2ea3483d-b488-4691-b2f6-3bdb54b0ef49/frr-k8s-webhook-server/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.499670 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.499719 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.631779 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6cfc5c9847-cf8qp_0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87/manager/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.922260 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84c66bf9fd-dsq4c_28d7af7a-86cc-4ceb-bc24-eab722a9813a/webhook-server/0.log"
Nov 29 08:25:05 crc kubenswrapper[4660]: I1129 08:25:05.970769 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcx42_ff906a3b-62c0-4073-afaf-67e927a77020/kube-rbac-proxy/0.log"
Nov 29 08:25:06 crc kubenswrapper[4660]: I1129 08:25:06.354327 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/frr/0.log"
Nov 29 08:25:06 crc kubenswrapper[4660]: I1129 08:25:06.536826 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcx42_ff906a3b-62c0-4073-afaf-67e927a77020/speaker/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.292773 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.450319 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.475914 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.507976 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.692544 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.699198 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/extract/0.log"
Nov 29 08:25:20 crc kubenswrapper[4660]: I1129 08:25:20.731224 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.275177 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.460257 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.481711 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.489473 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.659845 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.660787 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.675058 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/extract/0.log"
Nov 29 08:25:21 crc kubenswrapper[4660]: I1129 08:25:21.881480 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.051892 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.070293 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.120421 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.269263 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.326943 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.510774 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.873125 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/registry-server/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.882131 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log"
Nov 29 08:25:22 crc kubenswrapper[4660]: I1129 08:25:22.962482 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.016581 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.148811 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.226065 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.407664 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/registry-server/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.483312 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4msqn_f9482d0d-cad1-43a2-a0f9-523323125ae2/marketplace-operator/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.504746 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log"
Nov 29 08:25:23 crc kubenswrapper[4660]: I1129 08:25:23.958183 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.018028 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.020961 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.132147 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.207183 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.298201 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.343515 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/registry-server/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.514475 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.514556 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.551398 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.698652 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log"
Nov 29 08:25:24 crc kubenswrapper[4660]: I1129 08:25:24.739255 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log"
Nov 29 08:25:25 crc kubenswrapper[4660]: I1129 08:25:25.157953 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/registry-server/0.log"
Nov 29 08:25:35 crc kubenswrapper[4660]: I1129 08:25:35.500296 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:25:35 crc kubenswrapper[4660]: I1129 08:25:35.500846 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:25:35 crc kubenswrapper[4660]: I1129 08:25:35.500900 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w"
Nov 29 08:25:35 crc kubenswrapper[4660]: I1129 08:25:35.501968 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 29 08:25:35 crc kubenswrapper[4660]: I1129 08:25:35.502025 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1" gracePeriod=600
Nov 29 08:25:36 crc kubenswrapper[4660]: I1129 08:25:36.238451 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1" exitCode=0
Nov 29 08:25:36 crc kubenswrapper[4660]: I1129 08:25:36.238534 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1"}
Nov 29 08:25:36 crc kubenswrapper[4660]: I1129 08:25:36.239267 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671"}
Nov 29 08:25:36 crc kubenswrapper[4660]: I1129 08:25:36.239303 4660 scope.go:117] "RemoveContainer" containerID="b6d43c8090af213f2fb0a2b5480bcf104f60884e174cc157a4d7747067ea2f99"
Nov 29 08:27:18 crc kubenswrapper[4660]: I1129 08:27:18.274145 4660 generic.go:334] "Generic (PLEG): container finished" podID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerID="fb378b62c6155db125d9f97935d62f38769da551038b7f7a99c4fb024578e785" exitCode=0
Nov 29 08:27:18 crc kubenswrapper[4660]: I1129 08:27:18.274266 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd6qz/must-gather-8j26b" event={"ID":"bd729116-e36e-43fe-bdae-388a4bcb0976","Type":"ContainerDied","Data":"fb378b62c6155db125d9f97935d62f38769da551038b7f7a99c4fb024578e785"}
Nov 29 08:27:18 crc kubenswrapper[4660]: I1129 08:27:18.275596 4660 scope.go:117] "RemoveContainer" containerID="fb378b62c6155db125d9f97935d62f38769da551038b7f7a99c4fb024578e785"
Nov 29 08:27:18 crc kubenswrapper[4660]: I1129 08:27:18.560125 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd6qz_must-gather-8j26b_bd729116-e36e-43fe-bdae-388a4bcb0976/gather/0.log"
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.017483 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd6qz/must-gather-8j26b"]
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.018696 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bd6qz/must-gather-8j26b" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="copy" containerID="cri-o://7dfbde036d9fd3ce3598752ea21a1ed62c8a1b3c166c74779bb741cede975038" gracePeriod=2
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.029144 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd6qz/must-gather-8j26b"]
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.376287 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd6qz_must-gather-8j26b_bd729116-e36e-43fe-bdae-388a4bcb0976/copy/0.log"
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.376811 4660 generic.go:334] "Generic (PLEG): container finished" podID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerID="7dfbde036d9fd3ce3598752ea21a1ed62c8a1b3c166c74779bb741cede975038" exitCode=143
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.376851 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c205367bf04f21dd9dba2ec170bbe5d47ccb49b2217eb1580416e22905bc747"
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.459395 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd6qz_must-gather-8j26b_bd729116-e36e-43fe-bdae-388a4bcb0976/copy/0.log"
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.460401 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/must-gather-8j26b"
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.481971 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output\") pod \"bd729116-e36e-43fe-bdae-388a4bcb0976\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") "
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.482287 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65ndx\" (UniqueName: \"kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx\") pod \"bd729116-e36e-43fe-bdae-388a4bcb0976\" (UID: \"bd729116-e36e-43fe-bdae-388a4bcb0976\") "
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.503510 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx" (OuterVolumeSpecName: "kube-api-access-65ndx") pod "bd729116-e36e-43fe-bdae-388a4bcb0976" (UID: "bd729116-e36e-43fe-bdae-388a4bcb0976"). InnerVolumeSpecName "kube-api-access-65ndx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.584332 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65ndx\" (UniqueName: \"kubernetes.io/projected/bd729116-e36e-43fe-bdae-388a4bcb0976-kube-api-access-65ndx\") on node \"crc\" DevicePath \"\""
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.646877 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd729116-e36e-43fe-bdae-388a4bcb0976" (UID: "bd729116-e36e-43fe-bdae-388a4bcb0976"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.685697 4660 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd729116-e36e-43fe-bdae-388a4bcb0976-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 29 08:27:27 crc kubenswrapper[4660]: I1129 08:27:27.704029 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" path="/var/lib/kubelet/pods/bd729116-e36e-43fe-bdae-388a4bcb0976/volumes"
Nov 29 08:27:28 crc kubenswrapper[4660]: I1129 08:27:28.383739 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd6qz/must-gather-8j26b"
Nov 29 08:27:35 crc kubenswrapper[4660]: I1129 08:27:35.500113 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:27:35 crc kubenswrapper[4660]: I1129 08:27:35.500846 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.831482 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:27:55 crc kubenswrapper[4660]: E1129 08:27:55.832255 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="gather"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832266 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="gather"
Nov 29 08:27:55 crc kubenswrapper[4660]: E1129 08:27:55.832281 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="copy"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832287 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="copy"
Nov 29 08:27:55 crc kubenswrapper[4660]: E1129 08:27:55.832318 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2a59b3-36ce-457a-8ee5-d39a762685b5" containerName="container-00"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832324 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2a59b3-36ce-457a-8ee5-d39a762685b5" containerName="container-00"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832533 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="gather"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832546 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2a59b3-36ce-457a-8ee5-d39a762685b5" containerName="container-00"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.832558 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd729116-e36e-43fe-bdae-388a4bcb0976" containerName="copy"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.835390 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.907115 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.973342 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjlf\" (UniqueName: \"kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.973407 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:55 crc kubenswrapper[4660]: I1129 08:27:55.973493 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.075632 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.075730 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjlf\" (UniqueName: \"kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.075775 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.076325 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.076562 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.097878 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjlf\" (UniqueName: \"kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf\") pod \"redhat-marketplace-zt657\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") " pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.204211 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:27:56 crc kubenswrapper[4660]: I1129 08:27:56.743024 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:27:57 crc kubenswrapper[4660]: I1129 08:27:57.664978 4660 generic.go:334] "Generic (PLEG): container finished" podID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerID="30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85" exitCode=0
Nov 29 08:27:57 crc kubenswrapper[4660]: I1129 08:27:57.665061 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerDied","Data":"30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85"}
Nov 29 08:27:57 crc kubenswrapper[4660]: I1129 08:27:57.665266 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerStarted","Data":"a32e22007d9e6950a31bdf2fcb49fcae01f12211b8a718259edc84ac5333fd3d"}
Nov 29 08:27:57 crc kubenswrapper[4660]: I1129 08:27:57.668108 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 29 08:27:58 crc kubenswrapper[4660]: I1129 08:27:58.676537 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerStarted","Data":"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"}
Nov 29 08:27:59 crc kubenswrapper[4660]: I1129 08:27:59.688242 4660 generic.go:334] "Generic (PLEG): container finished" podID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerID="eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203" exitCode=0
Nov 29 08:27:59 crc kubenswrapper[4660]: I1129 08:27:59.688538 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerDied","Data":"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"}
Nov 29 08:28:00 crc kubenswrapper[4660]: I1129 08:28:00.702066 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerStarted","Data":"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"}
Nov 29 08:28:00 crc kubenswrapper[4660]: I1129 08:28:00.730347 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zt657" podStartSLOduration=3.265037401 podStartE2EDuration="5.73032487s" podCreationTimestamp="2025-11-29 08:27:55 +0000 UTC" firstStartedPulling="2025-11-29 08:27:57.667789622 +0000 UTC m=+4368.221319531" lastFinishedPulling="2025-11-29 08:28:00.133077071 +0000 UTC m=+4370.686607000" observedRunningTime="2025-11-29 08:28:00.724138613 +0000 UTC m=+4371.277668512" watchObservedRunningTime="2025-11-29 08:28:00.73032487 +0000 UTC m=+4371.283854769"
Nov 29 08:28:05 crc kubenswrapper[4660]: I1129 08:28:05.500905 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:28:05 crc kubenswrapper[4660]: I1129 08:28:05.501422 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:28:06 crc kubenswrapper[4660]: I1129 08:28:06.207849 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:06 crc kubenswrapper[4660]: I1129 08:28:06.209035 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:06 crc kubenswrapper[4660]: I1129 08:28:06.287671 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:06 crc kubenswrapper[4660]: I1129 08:28:06.801581 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:06 crc kubenswrapper[4660]: I1129 08:28:06.852638 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.629714 4660 scope.go:117] "RemoveContainer" containerID="b6f95c582f9c520c5742e4f7d902159f7f1efce9f3f48b868baabafda7bc6c38"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.659145 4660 scope.go:117] "RemoveContainer" containerID="89a101894d0a13c9a6c4eef108eee4470f5619aa6a26cec9258371b3e12946f3"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.689056 4660 scope.go:117] "RemoveContainer" containerID="23b36c62966d685c10bac49bdb385df003283296dbb9e4bd4704b544f73bb6b9"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.746225 4660 scope.go:117] "RemoveContainer" containerID="d56e5092cd39928d4e44c195fbe7d8c27a5454bcfd59ac5becfc0125bd54095d"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.821876 4660 scope.go:117] "RemoveContainer" containerID="fb6747fe110c2facb83f0f42423fd4e0700ac1c696597b4d30b4dc7fda51e6ed"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.842966 4660 scope.go:117] "RemoveContainer" containerID="fb378b62c6155db125d9f97935d62f38769da551038b7f7a99c4fb024578e785"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.905331 4660 scope.go:117] "RemoveContainer" containerID="7dfbde036d9fd3ce3598752ea21a1ed62c8a1b3c166c74779bb741cede975038"
Nov 29 08:28:07 crc kubenswrapper[4660]: I1129 08:28:07.935531 4660 scope.go:117] "RemoveContainer" containerID="f397edeabb2ab7fbb8268f920f8ad2c950dee6f825d61c5bfe27f1bf593c5257"
Nov 29 08:28:08 crc kubenswrapper[4660]: I1129 08:28:08.820268 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zt657" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="registry-server" containerID="cri-o://d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617" gracePeriod=2
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.783931 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.834219 4660 generic.go:334] "Generic (PLEG): container finished" podID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerID="d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617" exitCode=0
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.834265 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerDied","Data":"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"}
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.834365 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt657" event={"ID":"4be20bb3-4aec-427e-8143-9a31d1312ada","Type":"ContainerDied","Data":"a32e22007d9e6950a31bdf2fcb49fcae01f12211b8a718259edc84ac5333fd3d"}
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.834391 4660 scope.go:117] "RemoveContainer" containerID="d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.834578 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt657"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.857703 4660 scope.go:117] "RemoveContainer" containerID="eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.860669 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content\") pod \"4be20bb3-4aec-427e-8143-9a31d1312ada\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") "
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.860856 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjjlf\" (UniqueName: \"kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf\") pod \"4be20bb3-4aec-427e-8143-9a31d1312ada\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") "
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.860949 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities\") pod \"4be20bb3-4aec-427e-8143-9a31d1312ada\" (UID: \"4be20bb3-4aec-427e-8143-9a31d1312ada\") "
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.866928 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities" (OuterVolumeSpecName: "utilities") pod "4be20bb3-4aec-427e-8143-9a31d1312ada" (UID: "4be20bb3-4aec-427e-8143-9a31d1312ada"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.868785 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf" (OuterVolumeSpecName: "kube-api-access-qjjlf") pod "4be20bb3-4aec-427e-8143-9a31d1312ada" (UID: "4be20bb3-4aec-427e-8143-9a31d1312ada"). InnerVolumeSpecName "kube-api-access-qjjlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.881521 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4be20bb3-4aec-427e-8143-9a31d1312ada" (UID: "4be20bb3-4aec-427e-8143-9a31d1312ada"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.894139 4660 scope.go:117] "RemoveContainer" containerID="30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.958001 4660 scope.go:117] "RemoveContainer" containerID="d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"
Nov 29 08:28:09 crc kubenswrapper[4660]: E1129 08:28:09.958391 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617\": container with ID starting with d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617 not found: ID does not exist" containerID="d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.958423 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617"} err="failed to get container status \"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617\": rpc error: code = NotFound desc = could not find container \"d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617\": container with ID starting with d7390091d922a756b4aaddd19d8bd6a0a81999a62687b2b68a4b58b0bc2f9617 not found: ID does not exist"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.958443 4660 scope.go:117] "RemoveContainer" containerID="eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"
Nov 29 08:28:09 crc kubenswrapper[4660]: E1129 08:28:09.958768 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203\": container with ID starting with eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203 not found: ID does not exist" containerID="eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.958792 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203"} err="failed to get container status \"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203\": rpc error: code = NotFound desc = could not find container \"eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203\": container with ID starting with eb3e6e383c134935458e23e639e7d5240af0330b56d47f56284e8ca499da2203 not found: ID does not exist"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.958806 4660 scope.go:117] "RemoveContainer" containerID="30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85"
Nov 29 08:28:09 crc kubenswrapper[4660]: E1129 08:28:09.959260 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85\": container with ID starting with 30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85 not found: ID does not exist" containerID="30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.959365 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85"} err="failed to get container status \"30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85\": rpc error: code = NotFound desc = could not find container \"30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85\": container with ID starting with 30690bf6d8909c2192893c654d5bf49f67939be605ab14371ba43aeeeeb1de85 not found: ID does not exist"
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.962908 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.962940 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjjlf\" (UniqueName: \"kubernetes.io/projected/4be20bb3-4aec-427e-8143-9a31d1312ada-kube-api-access-qjjlf\") on node \"crc\" DevicePath \"\""
Nov 29 08:28:09 crc kubenswrapper[4660]: I1129 08:28:09.962957 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4be20bb3-4aec-427e-8143-9a31d1312ada-utilities\") on node \"crc\" DevicePath \"\""
Nov 29 08:28:10 crc kubenswrapper[4660]: I1129 08:28:10.168343 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:28:10 crc kubenswrapper[4660]: I1129 08:28:10.176760 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt657"]
Nov 29 08:28:11 crc kubenswrapper[4660]: I1129 08:28:11.706629 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" path="/var/lib/kubelet/pods/4be20bb3-4aec-427e-8143-9a31d1312ada/volumes"
Nov 29 08:28:35 crc kubenswrapper[4660]: I1129 08:28:35.500266 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 29 08:28:35 crc kubenswrapper[4660]: I1129 08:28:35.500836 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 29 08:28:35 crc kubenswrapper[4660]: I1129 08:28:35.500884 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w"
Nov 29 08:28:35 crc kubenswrapper[4660]: I1129 08:28:35.501669 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671"}
pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:28:35 crc kubenswrapper[4660]: I1129 08:28:35.501729 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" gracePeriod=600 Nov 29 08:28:35 crc kubenswrapper[4660]: E1129 08:28:35.631846 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:28:36 crc kubenswrapper[4660]: I1129 08:28:36.087948 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" exitCode=0 Nov 29 08:28:36 crc kubenswrapper[4660]: I1129 08:28:36.088024 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671"} Nov 29 08:28:36 crc kubenswrapper[4660]: I1129 08:28:36.088372 4660 scope.go:117] "RemoveContainer" containerID="7491638fb3efde4e7767c268a33badef3c42cb700e7526935e1de64c7b71e8a1" Nov 29 08:28:36 crc kubenswrapper[4660]: I1129 08:28:36.089173 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:28:36 crc kubenswrapper[4660]: E1129 08:28:36.090739 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:28:49 crc kubenswrapper[4660]: I1129 08:28:49.705298 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:28:49 crc kubenswrapper[4660]: E1129 08:28:49.706084 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:29:04 crc kubenswrapper[4660]: I1129 08:29:04.694333 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:29:04 crc kubenswrapper[4660]: E1129 08:29:04.695235 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:29:16 crc kubenswrapper[4660]: I1129 08:29:16.694155 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:29:16 crc kubenswrapper[4660]: E1129 08:29:16.695242 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:29:29 crc kubenswrapper[4660]: I1129 08:29:29.699533 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:29:29 crc kubenswrapper[4660]: E1129 08:29:29.700252 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:29:40 crc kubenswrapper[4660]: I1129 08:29:40.693766 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:29:40 crc kubenswrapper[4660]: E1129 08:29:40.696500 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:29:53 crc kubenswrapper[4660]: I1129 08:29:53.694178 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:29:53 crc kubenswrapper[4660]: E1129 08:29:53.695113 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.201908 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv"] Nov 29 08:30:00 crc kubenswrapper[4660]: E1129 08:30:00.203932 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.204022 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" 
containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4660]: E1129 08:30:00.204084 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="extract-utilities" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.204146 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="extract-utilities" Nov 29 08:30:00 crc kubenswrapper[4660]: E1129 08:30:00.204243 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="extract-content" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.204308 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="extract-content" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.204533 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be20bb3-4aec-427e-8143-9a31d1312ada" containerName="registry-server" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.205228 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.209571 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.209574 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.235867 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv"] Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.294638 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.294700 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.294991 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvwg\" (UniqueName: \"kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.396430 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvwg\" (UniqueName: \"kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.396503 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.396531 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.397404 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.403421 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.412182 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvwg\" (UniqueName: \"kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg\") pod \"collect-profiles-29406750-7s5wv\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:00 crc kubenswrapper[4660]: I1129 08:30:00.537361 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:01 crc kubenswrapper[4660]: I1129 08:30:01.048782 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv"] Nov 29 08:30:01 crc kubenswrapper[4660]: I1129 08:30:01.890298 4660 generic.go:334] "Generic (PLEG): container finished" podID="a63c76dc-a735-4d40-8b27-0157e656cdac" containerID="9f4bb93c934b485bab99191b24ead62a83ea5b132f4d14278cc4f0c172234f89" exitCode=0 Nov 29 08:30:01 crc kubenswrapper[4660]: I1129 08:30:01.890368 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" event={"ID":"a63c76dc-a735-4d40-8b27-0157e656cdac","Type":"ContainerDied","Data":"9f4bb93c934b485bab99191b24ead62a83ea5b132f4d14278cc4f0c172234f89"} Nov 29 08:30:01 crc kubenswrapper[4660]: I1129 08:30:01.890594 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" event={"ID":"a63c76dc-a735-4d40-8b27-0157e656cdac","Type":"ContainerStarted","Data":"bd32673955c82b35029f4c865e4041d6c2939d0adcf7b11196a455aae5acdf00"} Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.830592 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.869754 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume\") pod \"a63c76dc-a735-4d40-8b27-0157e656cdac\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.869861 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfvwg\" (UniqueName: \"kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg\") pod \"a63c76dc-a735-4d40-8b27-0157e656cdac\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.869941 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume\") pod \"a63c76dc-a735-4d40-8b27-0157e656cdac\" (UID: \"a63c76dc-a735-4d40-8b27-0157e656cdac\") " Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.871432 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume" (OuterVolumeSpecName: "config-volume") pod "a63c76dc-a735-4d40-8b27-0157e656cdac" (UID: "a63c76dc-a735-4d40-8b27-0157e656cdac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.880066 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg" (OuterVolumeSpecName: "kube-api-access-qfvwg") pod "a63c76dc-a735-4d40-8b27-0157e656cdac" (UID: "a63c76dc-a735-4d40-8b27-0157e656cdac"). InnerVolumeSpecName "kube-api-access-qfvwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.880898 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a63c76dc-a735-4d40-8b27-0157e656cdac" (UID: "a63c76dc-a735-4d40-8b27-0157e656cdac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.910952 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" event={"ID":"a63c76dc-a735-4d40-8b27-0157e656cdac","Type":"ContainerDied","Data":"bd32673955c82b35029f4c865e4041d6c2939d0adcf7b11196a455aae5acdf00"} Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.911311 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd32673955c82b35029f4c865e4041d6c2939d0adcf7b11196a455aae5acdf00" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.911277 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406750-7s5wv" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.973146 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a63c76dc-a735-4d40-8b27-0157e656cdac-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.973527 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfvwg\" (UniqueName: \"kubernetes.io/projected/a63c76dc-a735-4d40-8b27-0157e656cdac-kube-api-access-qfvwg\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:03 crc kubenswrapper[4660]: I1129 08:30:03.973627 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c76dc-a735-4d40-8b27-0157e656cdac-config-volume\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:04 crc kubenswrapper[4660]: I1129 08:30:04.917549 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk"] Nov 29 08:30:04 crc kubenswrapper[4660]: I1129 08:30:04.924989 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406705-75gdk"] Nov 29 08:30:05 crc kubenswrapper[4660]: I1129 08:30:05.693929 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:30:05 crc kubenswrapper[4660]: E1129 08:30:05.694477 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:05 crc kubenswrapper[4660]: I1129 08:30:05.708019 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a07cb2d-b206-422c-be7d-1e2952fb7a96" path="/var/lib/kubelet/pods/1a07cb2d-b206-422c-be7d-1e2952fb7a96/volumes" Nov 29 08:30:08 crc kubenswrapper[4660]: I1129 08:30:08.081775 4660 scope.go:117] "RemoveContainer" 
containerID="c7490b39df81a89729eb9d73a877a4abee3c03c22e5751b2e41fdb4cffd1dea2" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.480560 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nbmbp/must-gather-phc4j"] Nov 29 08:30:12 crc kubenswrapper[4660]: E1129 08:30:12.481373 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63c76dc-a735-4d40-8b27-0157e656cdac" containerName="collect-profiles" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.481385 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63c76dc-a735-4d40-8b27-0157e656cdac" containerName="collect-profiles" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.481554 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63c76dc-a735-4d40-8b27-0157e656cdac" containerName="collect-profiles" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.482468 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.484731 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nbmbp"/"default-dockercfg-7z6zf" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.484961 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nbmbp"/"kube-root-ca.crt" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.493316 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nbmbp"/"openshift-service-ca.crt" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.526690 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nbmbp/must-gather-phc4j"] Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.584432 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2fls\" (UniqueName: \"kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.584598 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.686691 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2fls\" (UniqueName: \"kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.687039 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.687405 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" 
(UniqueName: \"kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.704796 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2fls\" (UniqueName: \"kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls\") pod \"must-gather-phc4j\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:12 crc kubenswrapper[4660]: I1129 08:30:12.798377 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:30:13 crc kubenswrapper[4660]: I1129 08:30:13.896550 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nbmbp/must-gather-phc4j"] Nov 29 08:30:15 crc kubenswrapper[4660]: I1129 08:30:15.016881 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/must-gather-phc4j" event={"ID":"9126939e-659d-4706-bbab-3ddb63a1be16","Type":"ContainerStarted","Data":"0b60fb27dab3803b540d56a2c692d61e0d8079b85003e7e8437f868abdc65913"} Nov 29 08:30:15 crc kubenswrapper[4660]: I1129 08:30:15.017198 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/must-gather-phc4j" event={"ID":"9126939e-659d-4706-bbab-3ddb63a1be16","Type":"ContainerStarted","Data":"29a90d9f8ab6f6e5fdd6909a74259035b5b8835a518cac50cd13a4a47c669545"} Nov 29 08:30:15 crc kubenswrapper[4660]: I1129 08:30:15.017209 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/must-gather-phc4j" event={"ID":"9126939e-659d-4706-bbab-3ddb63a1be16","Type":"ContainerStarted","Data":"b7b2217acebd40f4d7aed7dceaf1a0d35020f33a771dfa3564a11a2fea39e182"} Nov 29 08:30:15 crc kubenswrapper[4660]: I1129 08:30:15.037342 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nbmbp/must-gather-phc4j" podStartSLOduration=3.037326313 podStartE2EDuration="3.037326313s" podCreationTimestamp="2025-11-29 08:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:30:15.032376088 +0000 UTC m=+4505.585906017" watchObservedRunningTime="2025-11-29 08:30:15.037326313 +0000 UTC m=+4505.590856202" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.482045 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-95pv2"] Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.483653 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.493898 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfkv\" (UniqueName: \"kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.494212 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.595993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkfkv\" (UniqueName: \"kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.596103 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.596193 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.627662 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkfkv\" (UniqueName: \"kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv\") pod \"crc-debug-95pv2\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.693262 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:30:18 crc kubenswrapper[4660]: E1129 08:30:18.693503 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:18 crc kubenswrapper[4660]: I1129 08:30:18.800676 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:18 crc kubenswrapper[4660]: W1129 08:30:18.844728 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93472056_d803_4adb_96c7_43053eeee332.slice/crio-a999cf9e5e324900da93f70415bf36730ca378681cfafd27da20cf4f46eee031 WatchSource:0}: Error finding container a999cf9e5e324900da93f70415bf36730ca378681cfafd27da20cf4f46eee031: Status 404 returned error can't find the container with id a999cf9e5e324900da93f70415bf36730ca378681cfafd27da20cf4f46eee031 Nov 29 08:30:19 crc kubenswrapper[4660]: I1129 08:30:19.052578 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" event={"ID":"93472056-d803-4adb-96c7-43053eeee332","Type":"ContainerStarted","Data":"3dfec153723dc34679c9e250a6cf0bb4c656b0d4bcdf1ccb3377d80be6ae0202"} Nov 29 08:30:19 crc kubenswrapper[4660]: I1129 08:30:19.052947 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" event={"ID":"93472056-d803-4adb-96c7-43053eeee332","Type":"ContainerStarted","Data":"a999cf9e5e324900da93f70415bf36730ca378681cfafd27da20cf4f46eee031"} Nov 29 08:30:19 crc kubenswrapper[4660]: I1129 08:30:19.075815 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" podStartSLOduration=1.07579482 podStartE2EDuration="1.07579482s" podCreationTimestamp="2025-11-29 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 08:30:19.073335623 +0000 UTC m=+4509.626865522" watchObservedRunningTime="2025-11-29 08:30:19.07579482 +0000 UTC m=+4509.629324719" Nov 29 08:30:32 crc kubenswrapper[4660]: I1129 08:30:32.694862 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:30:32 crc kubenswrapper[4660]: E1129 08:30:32.695652 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:46 crc kubenswrapper[4660]: I1129 08:30:46.694690 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:30:46 crc kubenswrapper[4660]: E1129 08:30:46.695450 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:51 crc kubenswrapper[4660]: I1129 08:30:51.334722 4660 generic.go:334] "Generic (PLEG): container finished" podID="93472056-d803-4adb-96c7-43053eeee332" containerID="3dfec153723dc34679c9e250a6cf0bb4c656b0d4bcdf1ccb3377d80be6ae0202" exitCode=0 Nov 29 08:30:51 crc kubenswrapper[4660]: I1129 08:30:51.334814 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-nbmbp/crc-debug-95pv2" event={"ID":"93472056-d803-4adb-96c7-43053eeee332","Type":"ContainerDied","Data":"3dfec153723dc34679c9e250a6cf0bb4c656b0d4bcdf1ccb3377d80be6ae0202"} Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.461253 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.489980 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-95pv2"] Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.497499 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-95pv2"] Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.549135 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkfkv\" (UniqueName: \"kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv\") pod \"93472056-d803-4adb-96c7-43053eeee332\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.549311 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host\") pod \"93472056-d803-4adb-96c7-43053eeee332\" (UID: \"93472056-d803-4adb-96c7-43053eeee332\") " Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.549859 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host" (OuterVolumeSpecName: "host") pod "93472056-d803-4adb-96c7-43053eeee332" (UID: "93472056-d803-4adb-96c7-43053eeee332"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.550100 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93472056-d803-4adb-96c7-43053eeee332-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.566197 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv" (OuterVolumeSpecName: "kube-api-access-fkfkv") pod "93472056-d803-4adb-96c7-43053eeee332" (UID: "93472056-d803-4adb-96c7-43053eeee332"). InnerVolumeSpecName "kube-api-access-fkfkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:30:52 crc kubenswrapper[4660]: I1129 08:30:52.651168 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkfkv\" (UniqueName: \"kubernetes.io/projected/93472056-d803-4adb-96c7-43053eeee332-kube-api-access-fkfkv\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.357409 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a999cf9e5e324900da93f70415bf36730ca378681cfafd27da20cf4f46eee031" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.357473 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-95pv2" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.704822 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93472056-d803-4adb-96c7-43053eeee332" path="/var/lib/kubelet/pods/93472056-d803-4adb-96c7-43053eeee332/volumes" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.709707 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-jr8zr"] Nov 29 08:30:53 crc kubenswrapper[4660]: E1129 08:30:53.710143 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93472056-d803-4adb-96c7-43053eeee332" containerName="container-00" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.710165 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="93472056-d803-4adb-96c7-43053eeee332" containerName="container-00" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.710349 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="93472056-d803-4adb-96c7-43053eeee332" containerName="container-00" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.710967 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.771135 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.771296 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxnlq\" (UniqueName: \"kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.872175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.872283 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxnlq\" (UniqueName: \"kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.872865 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:53 crc kubenswrapper[4660]: I1129 08:30:53.891691 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxnlq\" (UniqueName: \"kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq\") pod \"crc-debug-jr8zr\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " 
pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:54 crc kubenswrapper[4660]: I1129 08:30:54.031182 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:54 crc kubenswrapper[4660]: I1129 08:30:54.370196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" event={"ID":"d3f7b755-03b4-47cd-9767-8795d1c7cecc","Type":"ContainerStarted","Data":"1d18643348fe92b9014437cfe7ff18fb0161144717e7ddcc749320ceaa355849"} Nov 29 08:30:54 crc kubenswrapper[4660]: I1129 08:30:54.370495 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" event={"ID":"d3f7b755-03b4-47cd-9767-8795d1c7cecc","Type":"ContainerStarted","Data":"62ac90659673b6e38bc8ae0b1713f1eb6ed089641f3dc8097dcd4127242a3a8d"} Nov 29 08:30:54 crc kubenswrapper[4660]: I1129 08:30:54.762569 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-jr8zr"] Nov 29 08:30:54 crc kubenswrapper[4660]: I1129 08:30:54.779496 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-jr8zr"] Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.382930 4660 generic.go:334] "Generic (PLEG): container finished" podID="d3f7b755-03b4-47cd-9767-8795d1c7cecc" containerID="1d18643348fe92b9014437cfe7ff18fb0161144717e7ddcc749320ceaa355849" exitCode=0 Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.485095 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.602632 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxnlq\" (UniqueName: \"kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq\") pod \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.602870 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host\") pod \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\" (UID: \"d3f7b755-03b4-47cd-9767-8795d1c7cecc\") " Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.602955 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host" (OuterVolumeSpecName: "host") pod "d3f7b755-03b4-47cd-9767-8795d1c7cecc" (UID: "d3f7b755-03b4-47cd-9767-8795d1c7cecc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.603284 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3f7b755-03b4-47cd-9767-8795d1c7cecc-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.607444 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq" (OuterVolumeSpecName: "kube-api-access-xxnlq") pod "d3f7b755-03b4-47cd-9767-8795d1c7cecc" (UID: "d3f7b755-03b4-47cd-9767-8795d1c7cecc"). InnerVolumeSpecName "kube-api-access-xxnlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.704368 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f7b755-03b4-47cd-9767-8795d1c7cecc" path="/var/lib/kubelet/pods/d3f7b755-03b4-47cd-9767-8795d1c7cecc/volumes" Nov 29 08:30:55 crc kubenswrapper[4660]: I1129 08:30:55.705415 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxnlq\" (UniqueName: \"kubernetes.io/projected/d3f7b755-03b4-47cd-9767-8795d1c7cecc-kube-api-access-xxnlq\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.145728 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-mww7h"] Nov 29 08:30:56 crc kubenswrapper[4660]: E1129 08:30:56.146132 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f7b755-03b4-47cd-9767-8795d1c7cecc" containerName="container-00" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.146147 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f7b755-03b4-47cd-9767-8795d1c7cecc" containerName="container-00" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.146360 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f7b755-03b4-47cd-9767-8795d1c7cecc" containerName="container-00" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.147070 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.318526 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86hg\" (UniqueName: \"kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.318710 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.393288 4660 scope.go:117] "RemoveContainer" containerID="1d18643348fe92b9014437cfe7ff18fb0161144717e7ddcc749320ceaa355849" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.393359 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-jr8zr" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.420694 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.420864 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86hg\" (UniqueName: \"kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.421251 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.437363 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86hg\" (UniqueName: \"kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg\") pod \"crc-debug-mww7h\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: I1129 08:30:56.466961 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:56 crc kubenswrapper[4660]: W1129 08:30:56.494267 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21e4e2c6_8439_48cc_b63a_4549783a4e67.slice/crio-e9e21f1b84d82147774998b87ce11df90258f55c505dab4efead4acd1b791db8 WatchSource:0}: Error finding container e9e21f1b84d82147774998b87ce11df90258f55c505dab4efead4acd1b791db8: Status 404 returned error can't find the container with id e9e21f1b84d82147774998b87ce11df90258f55c505dab4efead4acd1b791db8 Nov 29 08:30:57 crc kubenswrapper[4660]: I1129 08:30:57.403272 4660 generic.go:334] "Generic (PLEG): container finished" podID="21e4e2c6-8439-48cc-b63a-4549783a4e67" containerID="fbf00776a1a9a4ac690b19a1808244be57d7f5782e26567e83f0106b8075c728" exitCode=0 Nov 29 08:30:57 crc kubenswrapper[4660]: I1129 08:30:57.403432 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" event={"ID":"21e4e2c6-8439-48cc-b63a-4549783a4e67","Type":"ContainerDied","Data":"fbf00776a1a9a4ac690b19a1808244be57d7f5782e26567e83f0106b8075c728"} Nov 29 08:30:57 crc kubenswrapper[4660]: I1129 08:30:57.404518 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" event={"ID":"21e4e2c6-8439-48cc-b63a-4549783a4e67","Type":"ContainerStarted","Data":"e9e21f1b84d82147774998b87ce11df90258f55c505dab4efead4acd1b791db8"} Nov 29 08:30:57 crc kubenswrapper[4660]: I1129 08:30:57.448173 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-mww7h"] Nov 29 08:30:57 crc kubenswrapper[4660]: I1129 08:30:57.460017 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nbmbp/crc-debug-mww7h"] Nov 29 08:30:57 crc 
kubenswrapper[4660]: I1129 08:30:57.696213 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:30:57 crc kubenswrapper[4660]: E1129 08:30:57.696949 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.509143 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.660102 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host\") pod \"21e4e2c6-8439-48cc-b63a-4549783a4e67\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.660391 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g86hg\" (UniqueName: \"kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg\") pod \"21e4e2c6-8439-48cc-b63a-4549783a4e67\" (UID: \"21e4e2c6-8439-48cc-b63a-4549783a4e67\") " Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.660232 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host" (OuterVolumeSpecName: "host") pod "21e4e2c6-8439-48cc-b63a-4549783a4e67" (UID: "21e4e2c6-8439-48cc-b63a-4549783a4e67"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.661245 4660 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21e4e2c6-8439-48cc-b63a-4549783a4e67-host\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.665940 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg" (OuterVolumeSpecName: "kube-api-access-g86hg") pod "21e4e2c6-8439-48cc-b63a-4549783a4e67" (UID: "21e4e2c6-8439-48cc-b63a-4549783a4e67"). InnerVolumeSpecName "kube-api-access-g86hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:30:58 crc kubenswrapper[4660]: I1129 08:30:58.763266 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g86hg\" (UniqueName: \"kubernetes.io/projected/21e4e2c6-8439-48cc-b63a-4549783a4e67-kube-api-access-g86hg\") on node \"crc\" DevicePath \"\"" Nov 29 08:30:59 crc kubenswrapper[4660]: I1129 08:30:59.423792 4660 scope.go:117] "RemoveContainer" containerID="fbf00776a1a9a4ac690b19a1808244be57d7f5782e26567e83f0106b8075c728" Nov 29 08:30:59 crc kubenswrapper[4660]: I1129 08:30:59.423826 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/crc-debug-mww7h" Nov 29 08:30:59 crc kubenswrapper[4660]: I1129 08:30:59.709115 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e4e2c6-8439-48cc-b63a-4549783a4e67" path="/var/lib/kubelet/pods/21e4e2c6-8439-48cc-b63a-4549783a4e67/volumes" Nov 29 08:31:08 crc kubenswrapper[4660]: I1129 08:31:08.693544 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:31:08 crc kubenswrapper[4660]: E1129 08:31:08.694528 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:31:23 crc kubenswrapper[4660]: I1129 08:31:23.693718 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:31:23 crc kubenswrapper[4660]: E1129 08:31:23.694345 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.089239 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-795c6b768d-rnj8x_f92699d7-37a0-4093-81b8-ddb680ca5263/barbican-api/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.392751 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65db494558-68jff_25ca5104-7d38-40bc-aa55-19dbd28b40f3/barbican-keystone-listener/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.637997 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65db494558-68jff_25ca5104-7d38-40bc-aa55-19dbd28b40f3/barbican-keystone-listener-log/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.650123 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-795c6b768d-rnj8x_f92699d7-37a0-4093-81b8-ddb680ca5263/barbican-api-log/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.701880 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f844c8dbc-j8g6j_b1f5216f-e274-4987-b2cc-98effb9661eb/barbican-worker/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.911709 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f844c8dbc-j8g6j_b1f5216f-e274-4987-b2cc-98effb9661eb/barbican-worker-log/0.log" Nov 29 08:31:32 crc kubenswrapper[4660]: I1129 08:31:32.917003 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rrnhh_92f06c4a-45f4-4542-b502-210d08515f70/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.467389 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/proxy-httpd/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.562797 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/ceilometer-notification-agent/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.572629 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/sg-core/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.612529 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_764276f9-3bdf-4936-a57f-dc98650de4b7/ceilometer-central-agent/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.782421 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b2bdc67-626d-4aa5-94ff-d413be98dc7c/cinder-api/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.827373 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b2bdc67-626d-4aa5-94ff-d413be98dc7c/cinder-api-log/0.log" Nov 29 08:31:33 crc kubenswrapper[4660]: I1129 08:31:33.966283 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b255ded3-2849-4f46-bb45-5c2485862b55/cinder-scheduler/0.log" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.038377 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b255ded3-2849-4f46-bb45-5c2485862b55/probe/0.log" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.072048 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-v9bjz_8d0ffb5c-54ae-48a8-9448-7b78f45814a7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.241355 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5kwc9_b6e39886-2df6-4257-babe-441252581041/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.384743 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/init/0.log" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.693814 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:31:34 crc kubenswrapper[4660]: E1129 08:31:34.694565 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:31:34 crc kubenswrapper[4660]: I1129 08:31:34.979418 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/init/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.078903 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-wl6mp_f13d98c7-68bf-4e21-936e-115f586f1dff/dnsmasq-dns/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.102332 4660 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5895t_c2698bcc-7e72-4b53-8bbf-9d71b4720148/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.472028 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e45b487-ff42-480a-a6a2-803949758e7a/glance-log/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.582103 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e45b487-ff42-480a-a6a2-803949758e7a/glance-httpd/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.640220 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2ec421d-c491-4c1f-9f9d-ec260df3cc87/glance-httpd/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.732262 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2ec421d-c491-4c1f-9f9d-ec260df3cc87/glance-log/0.log" Nov 29 08:31:35 crc kubenswrapper[4660]: I1129 08:31:35.985361 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d8477fd94-v56g5_953f9580-5907-45bf-ae44-e48149acc44c/horizon/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.101392 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fnrnv_93142c96-03e4-4441-a738-407379eeb07f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.188313 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d8477fd94-v56g5_953f9580-5907-45bf-ae44-e48149acc44c/horizon-log/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.299747 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-thxwk_955fb591-0de6-4f55-a61f-fc232791fe54/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.454052 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406721-p2zgx_a2ce58ac-319c-47df-b44b-8958659262f8/keystone-cron/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.489116 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54fd458c48-wjcjs_4696d01f-aadd-46fb-b966-f67035bb6ba4/keystone-api/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.536241 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d65ebb5a-68a4-4848-8093-92d49f373550/kube-state-metrics/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.717391 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wnmpm_d0e385e9-5832-4dae-832e-5e155dd48813/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:36 crc kubenswrapper[4660]: I1129 08:31:36.995018 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d5bfc6bd5-zc4q8_b601f952-5ec7-401c-b639-01245efb2379/neutron-httpd/0.log" Nov 29 08:31:37 crc kubenswrapper[4660]: I1129 08:31:37.072994 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d5bfc6bd5-zc4q8_b601f952-5ec7-401c-b639-01245efb2379/neutron-api/0.log" Nov 29 08:31:37 crc kubenswrapper[4660]: I1129 08:31:37.176325 
4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpfmj_f8a1eabb-ccbc-4ad9-9a51-031f9633f8d7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:37 crc kubenswrapper[4660]: I1129 08:31:37.544529 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2bdf1a62-5e19-4a99-9950-3208cdb8cd0b/nova-api-log/0.log" Nov 29 08:31:37 crc kubenswrapper[4660]: I1129 08:31:37.783779 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fd768c12-7e2d-4283-a390-0f17185cb3ca/nova-cell0-conductor-conductor/0.log" Nov 29 08:31:37 crc kubenswrapper[4660]: I1129 08:31:37.953269 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2bdf1a62-5e19-4a99-9950-3208cdb8cd0b/nova-api-api/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.035246 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6933c9c1-60f6-4099-982d-22b279546662/nova-cell1-conductor-conductor/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.177326 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_544cff03-d589-4ba8-ac61-e5976fe393d9/nova-cell1-novncproxy-novncproxy/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.318758 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-flbrk_f4ebec6a-7674-4948-94b8-51d4f1e6de90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.524408 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8fbec32-e360-48a4-802f-acafba9315fc/nova-metadata-log/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.747672 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f0b8bc00-d486-430f-ad6d-483e3372519b/nova-scheduler-scheduler/0.log" Nov 29 08:31:38 crc kubenswrapper[4660]: I1129 08:31:38.869344 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/mysql-bootstrap/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.052939 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/mysql-bootstrap/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.094405 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4a1c83c7-2cac-4b54-90c4-080b7f50cd7f/galera/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.279394 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/mysql-bootstrap/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.530762 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/mysql-bootstrap/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.638002 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb90d2bf-1b0e-4d18-9bff-2d9adb8e3910/galera/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.852115 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-wkgc6_3b3e8ed6-00c8-4d4b-a043-a5167ddf6a81/openstack-network-exporter/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.915171 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d541b23c-6413-4bee-834c-96e5d46a9155/openstackclient/0.log" Nov 29 08:31:39 crc kubenswrapper[4660]: I1129 08:31:39.982050 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8fbec32-e360-48a4-802f-acafba9315fc/nova-metadata-metadata/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.108510 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server-init/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.431260 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server-init/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.475893 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovs-vswitchd/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.533680 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rdslv_538da925-a098-483e-a112-334d0930655e/ovsdb-server/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.612724 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xdz26_a75569c9-ce83-4515-894c-b067e01f3d9b/ovn-controller/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.808215 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-llf58_62033900-fce1-44ce-9b4b-44d61b45123c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:40 crc kubenswrapper[4660]: I1129 08:31:40.884238 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a17c15c7-a4af-4447-b315-8558385d4449/openstack-network-exporter/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.018412 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a17c15c7-a4af-4447-b315-8558385d4449/ovn-northd/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.080521 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_825a377f-a7b3-4a9c-a39c-8e3086eb554f/openstack-network-exporter/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.217195 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_825a377f-a7b3-4a9c-a39c-8e3086eb554f/ovsdbserver-nb/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.348353 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d07487c-33de-4aa4-9878-bcdd17e2a1d9/openstack-network-exporter/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.371032 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6d07487c-33de-4aa4-9878-bcdd17e2a1d9/ovsdbserver-sb/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.681267 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4c5f6f9b-h8nfr_c8432d67-8b8a-43f4-96b5-e852610f702c/placement-log/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.734825 4660 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-5c4c5f6f9b-h8nfr_c8432d67-8b8a-43f4-96b5-e852610f702c/placement-api/0.log" Nov 29 08:31:41 crc kubenswrapper[4660]: I1129 08:31:41.820944 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/setup-container/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.087507 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/setup-container/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.099746 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/setup-container/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.117426 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_147cd78f-2d01-48d5-b43b-eda3532cf537/rabbitmq/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.275309 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/setup-container/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.322036 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b51d872c-13ff-4e5a-9c3b-dc644c7c19d6/rabbitmq/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.453528 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-btm5x_36029f28-c187-4b77-afda-fd74d56bd1c5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.533786 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-497rm_7917f022-eed4-4622-a10e-82a72f068b29/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.724703 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wdswp_f8c2c2ad-2cee-414f-a0df-76351f87c6e0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.994400 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tff2b_6a6ef616-fee3-4bcb-acef-c63943b96e22/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:42 crc kubenswrapper[4660]: I1129 08:31:42.998037 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hcf98_4118c243-9402-4481-abdd-0a5d0581415b/ssh-known-hosts-edpm-deployment/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.464527 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75ddc44955-xj8mn_27a79873-e3bd-4172-b5c3-17a981a9a091/proxy-server/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.467191 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75ddc44955-xj8mn_27a79873-e3bd-4172-b5c3-17a981a9a091/proxy-httpd/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.675483 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5xg97_d487e762-0eca-4f42-aae2-1b8674868db1/swift-ring-rebalance/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.755376 4660 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-auditor/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.775031 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-reaper/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.966305 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-replicator/0.log" Nov 29 08:31:43 crc kubenswrapper[4660]: I1129 08:31:43.978244 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-auditor/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.025244 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/account-server/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.110397 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-replicator/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.220687 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-server/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.259693 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/container-updater/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.375922 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-auditor/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.400573 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-expirer/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.535112 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-replicator/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.576216 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-server/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.702926 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/rsync/0.log" Nov 29 08:31:44 crc kubenswrapper[4660]: I1129 08:31:44.705424 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/object-updater/0.log" Nov 29 08:31:45 crc kubenswrapper[4660]: I1129 08:31:45.119131 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1875d22e-2809-4d96-9cb9-bac77320c5a3/swift-recon-cron/0.log" Nov 29 08:31:45 crc kubenswrapper[4660]: I1129 08:31:45.254395 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wkbdc_fddda6dc-cca7-41a8-8be3-1e6647af2356/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:45 crc kubenswrapper[4660]: I1129 08:31:45.397879 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_2731c762-e02a-4472-b014-19739f6c47da/tempest-tests-tempest-tests-runner/0.log" Nov 29 08:31:45 crc kubenswrapper[4660]: I1129 08:31:45.462167 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_efec8b85-1d8a-4f33-b482-a08afe9737bf/test-operator-logs-container/0.log" Nov 29 08:31:45 crc kubenswrapper[4660]: I1129 08:31:45.719469 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9rwss_fb67bcf4-d0ed-4dbb-b571-322a52c4c43f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 29 08:31:47 crc kubenswrapper[4660]: I1129 08:31:47.693777 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:31:47 crc kubenswrapper[4660]: E1129 08:31:47.694136 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:31:56 crc kubenswrapper[4660]: I1129 08:31:56.438194 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_46c3b1d2-02f5-4632-bf44-648754c2e83c/memcached/0.log" Nov 29 08:32:02 crc kubenswrapper[4660]: I1129 08:32:02.693183 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:32:02 crc kubenswrapper[4660]: E1129 08:32:02.693857 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.508379 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:14 crc kubenswrapper[4660]: E1129 08:32:14.509408 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e4e2c6-8439-48cc-b63a-4549783a4e67" containerName="container-00" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.509424 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e4e2c6-8439-48cc-b63a-4549783a4e67" containerName="container-00" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.511197 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e4e2c6-8439-48cc-b63a-4549783a4e67" containerName="container-00" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.512844 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.529830 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.650300 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.650420 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.650694 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4stw\" (UniqueName: \"kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.694285 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:32:14 crc kubenswrapper[4660]: E1129 08:32:14.694575 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.752578 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4stw\" (UniqueName: \"kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.752725 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.752827 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.753198 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.753253 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.773303 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4stw\" (UniqueName: \"kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw\") pod \"certified-operators-hjxz9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:14 crc kubenswrapper[4660]: I1129 08:32:14.832228 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:15 crc kubenswrapper[4660]: I1129 08:32:15.462081 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:16 crc kubenswrapper[4660]: I1129 08:32:16.091722 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerID="42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d" exitCode=0 Nov 29 08:32:16 crc kubenswrapper[4660]: I1129 08:32:16.091764 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerDied","Data":"42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d"} Nov 29 08:32:16 crc kubenswrapper[4660]: I1129 08:32:16.091787 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerStarted","Data":"48bfb4735a4defbf4a8396181055ab6b68cc283f3448f2affe5947f01bd8cd04"} Nov 29 08:32:17 crc kubenswrapper[4660]: I1129 08:32:17.102832 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerStarted","Data":"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87"} Nov 29 08:32:18 crc kubenswrapper[4660]: I1129 08:32:18.116435 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerID="887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87" exitCode=0 Nov 29 08:32:18 crc kubenswrapper[4660]: I1129 08:32:18.116486 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerDied","Data":"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87"} Nov 29 08:32:19 crc kubenswrapper[4660]: I1129 08:32:19.127748 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerStarted","Data":"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8"} Nov 29 08:32:19 crc kubenswrapper[4660]: I1129 
08:32:19.151754 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hjxz9" podStartSLOduration=2.623854371 podStartE2EDuration="5.151738074s" podCreationTimestamp="2025-11-29 08:32:14 +0000 UTC" firstStartedPulling="2025-11-29 08:32:16.095102436 +0000 UTC m=+4626.648632355" lastFinishedPulling="2025-11-29 08:32:18.622986119 +0000 UTC m=+4629.176516058" observedRunningTime="2025-11-29 08:32:19.144323172 +0000 UTC m=+4629.697853071" watchObservedRunningTime="2025-11-29 08:32:19.151738074 +0000 UTC m=+4629.705267973" Nov 29 08:32:21 crc kubenswrapper[4660]: I1129 08:32:21.609869 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59d587b55-wqktr_f0b999b3-e302-40ca-a1aa-5173b5655498/kube-rbac-proxy/0.log" Nov 29 08:32:21 crc kubenswrapper[4660]: I1129 08:32:21.643042 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59d587b55-wqktr_f0b999b3-e302-40ca-a1aa-5173b5655498/manager/0.log" Nov 29 08:32:21 crc kubenswrapper[4660]: I1129 08:32:21.832494 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cmgp5_0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd/kube-rbac-proxy/0.log" Nov 29 08:32:21 crc kubenswrapper[4660]: I1129 08:32:21.985438 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cmgp5_0f7f5fdc-8dd7-40cb-88cd-3fd3830101dd/manager/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.001912 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-jdqzs_81afdf1a-a8f8-4f69-8824-192bcf14424c/kube-rbac-proxy/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.240943 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-jdqzs_81afdf1a-a8f8-4f69-8824-192bcf14424c/manager/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.299486 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.466560 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.494515 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.520725 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.734306 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/extract/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.831740 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/util/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.839354 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dfd99bc0be1c9a4ed2a6c4a1157f4b4d9b791054fb872d3e1063d6b8a0v2gdl_1dafe4cc-65b9-45d7-9e59-4d26b6bbea27/pull/0.log" Nov 29 08:32:22 crc kubenswrapper[4660]: I1129 08:32:22.965861 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4gjhw_7ce83127-45e9-4a96-8815-538f3bde77ed/kube-rbac-proxy/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.164128 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v9rs2_29c0443d-0d08-4708-b268-07ae28680e01/kube-rbac-proxy/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.169128 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-4gjhw_7ce83127-45e9-4a96-8815-538f3bde77ed/manager/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.270677 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v9rs2_29c0443d-0d08-4708-b268-07ae28680e01/manager/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.428813 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-cwb2d_d2a4ddee-42a4-451d-9bd7-3028e4680d47/kube-rbac-proxy/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.468426 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-cwb2d_d2a4ddee-42a4-451d-9bd7-3028e4680d47/manager/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.644807 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-vrqgm_a6e93136-e20e-4070-ae0d-db82c3d2b464/kube-rbac-proxy/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.861847 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-vrqgm_a6e93136-e20e-4070-ae0d-db82c3d2b464/manager/0.log" Nov 29 08:32:23 crc kubenswrapper[4660]: I1129 08:32:23.957744 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-2mb85_edf52fa0-02fe-49d3-8368-fe26598027ec/kube-rbac-proxy/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.016589 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-2mb85_edf52fa0-02fe-49d3-8368-fe26598027ec/manager/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.093350 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-b2rlk_96a424c4-d4f3-49c2-94a3-20d236cb207d/kube-rbac-proxy/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.254399 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-b2rlk_96a424c4-d4f3-49c2-94a3-20d236cb207d/manager/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.266780 4660 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-v9g26_08635026-10f5-4929-b9f5-b5d6fcac6d28/kube-rbac-proxy/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.371092 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-v9g26_08635026-10f5-4929-b9f5-b5d6fcac6d28/manager/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.515422 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7446l_c0579e8a-66e1-4b7c-aaf8-435d07e6e98d/kube-rbac-proxy/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.554041 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7446l_c0579e8a-66e1-4b7c-aaf8-435d07e6e98d/manager/0.log" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.833795 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.837445 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.946218 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:24 crc kubenswrapper[4660]: I1129 08:32:24.978651 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8cnzr_b191bd3e-cd1b-43c8-99c4-54701a29dfda/manager/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.034595 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-8cnzr_b191bd3e-cd1b-43c8-99c4-54701a29dfda/kube-rbac-proxy/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.239201 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.268691 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t82nj_1688cfe7-0002-4b5c-916b-ca18c9519de3/kube-rbac-proxy/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.270359 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6c2m6_2badc2b5-6bdb-44b6-8d54-f8763fe78fd6/kube-rbac-proxy/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.287946 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.360004 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-t82nj_1688cfe7-0002-4b5c-916b-ca18c9519de3/manager/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.452569 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-6c2m6_2badc2b5-6bdb-44b6-8d54-f8763fe78fd6/manager/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.576938 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd49blbh_02680922-54f1-494d-a32d-e01b82b9cfd2/kube-rbac-proxy/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.643349 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd49blbh_02680922-54f1-494d-a32d-e01b82b9cfd2/manager/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.966475 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-f9fd8cd-p4sd2_b9aba585-e5b4-47a1-904b-f3f1f86d6251/operator/0.log" Nov 29 08:32:25 crc kubenswrapper[4660]: I1129 08:32:25.988209 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6hcx4_8afce996-f777-4ef3-a57d-d09faabc1b46/registry-server/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.543242 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-z5n6s_eb02d6d1-14c5-409f-8c54-60e35f909a84/kube-rbac-proxy/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.564413 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-z5n6s_eb02d6d1-14c5-409f-8c54-60e35f909a84/manager/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.694636 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:32:26 crc kubenswrapper[4660]: E1129 08:32:26.697323 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.778766 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fb5f7cfbf-7dwbm_e676373b-cd82-4455-ae35-62c31e458d5d/manager/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.824314 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-95ndx_d56ee9fc-8151-4442-b491-1e5c8faf48c4/kube-rbac-proxy/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.838479 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-95ndx_d56ee9fc-8151-4442-b491-1e5c8faf48c4/manager/0.log" Nov 29 08:32:26 crc kubenswrapper[4660]: I1129 08:32:26.933848 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8vp89_9ee27942-cb74-4ee0-b4b9-9f995b6604a4/operator/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.022962 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-724c7_e512b840-83f6-47dc-b5ed-669807cc2878/manager/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.091371 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-724c7_e512b840-83f6-47dc-b5ed-669807cc2878/kube-rbac-proxy/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.178368 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-4zn9g_01080af3-022a-430c-a9cc-b9b98f5214de/kube-rbac-proxy/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.225365 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hjxz9" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="registry-server" containerID="cri-o://31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8" gracePeriod=2 Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.370465 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-4zn9g_01080af3-022a-430c-a9cc-b9b98f5214de/manager/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.445169 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mw22w_e0c70c45-673e-47e6-80cd-99bbfbe6e695/manager/0.log" Nov 29 08:32:27 crc kubenswrapper[4660]: I1129 08:32:27.453669 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mw22w_e0c70c45-673e-47e6-80cd-99bbfbe6e695/kube-rbac-proxy/0.log" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.013686 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.140936 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hsm2_4747fced-480f-4185-b4e3-2dedd7f05614/kube-rbac-proxy/0.log" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.183236 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hsm2_4747fced-480f-4185-b4e3-2dedd7f05614/manager/0.log" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.204294 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4stw\" (UniqueName: \"kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw\") pod \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.204419 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities\") pod \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.204648 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content\") pod \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\" (UID: \"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9\") " Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.205165 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities" 
(OuterVolumeSpecName: "utilities") pod "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" (UID: "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.209727 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw" (OuterVolumeSpecName: "kube-api-access-j4stw") pod "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" (UID: "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9"). InnerVolumeSpecName "kube-api-access-j4stw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.211694 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.211721 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4stw\" (UniqueName: \"kubernetes.io/projected/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-kube-api-access-j4stw\") on node \"crc\" DevicePath \"\"" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.235178 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerID="31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8" exitCode=0 Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.235247 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerDied","Data":"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8"} Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.235283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjxz9" event={"ID":"6ac446c4-1114-46d7-898a-f6c5e7c9f5e9","Type":"ContainerDied","Data":"48bfb4735a4defbf4a8396181055ab6b68cc283f3448f2affe5947f01bd8cd04"} Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.235314 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjxz9" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.235327 4660 scope.go:117] "RemoveContainer" containerID="31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.247104 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" (UID: "6ac446c4-1114-46d7-898a-f6c5e7c9f5e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.280222 4660 scope.go:117] "RemoveContainer" containerID="887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.295423 4660 scope.go:117] "RemoveContainer" containerID="42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.313098 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.345276 4660 scope.go:117] "RemoveContainer" containerID="31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8" Nov 29 08:32:28 crc kubenswrapper[4660]: E1129 08:32:28.348306 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8\": container with ID starting with 31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8 not found: ID does not exist" containerID="31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.348359 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8"} err="failed to get container status \"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8\": rpc error: code = NotFound desc = could not find container \"31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8\": container with ID starting with 31816623f07a6501f71eda183747fa75363593dd417c1082d4ade9e82a4973e8 not found: ID does not exist" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.348388 4660 scope.go:117] "RemoveContainer" containerID="887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87" Nov 29 08:32:28 crc kubenswrapper[4660]: E1129 08:32:28.350804 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87\": container with ID starting with 887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87 not found: ID does not exist" containerID="887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.350833 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87"} err="failed to get container status \"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87\": rpc error: code = NotFound desc = could not find container \"887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87\": container with ID starting with 887659322e6c0461ee78de9884a25d88497f9ce8e6aa5b7f773a4fe8420adf87 not found: ID does not exist" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.350850 4660 scope.go:117] "RemoveContainer" containerID="42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d" Nov 29 08:32:28 crc kubenswrapper[4660]: E1129 08:32:28.353953 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d\": container with ID starting with 42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d not found: ID does not exist" containerID="42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.353997 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d"} err="failed to get container status \"42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d\": rpc error: code = NotFound desc = could not find container \"42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d\": container with ID starting with 42f9604abef087a81a9ae6447a6fabfca31058d5c79d550c99110ca19d71fd1d not found: ID does not exist" Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.575695 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:28 crc kubenswrapper[4660]: I1129 08:32:28.586895 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hjxz9"] Nov 29 08:32:29 crc kubenswrapper[4660]: I1129 08:32:29.706571 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" path="/var/lib/kubelet/pods/6ac446c4-1114-46d7-898a-f6c5e7c9f5e9/volumes" Nov 29 08:32:41 crc kubenswrapper[4660]: I1129 08:32:41.697564 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:32:41 crc kubenswrapper[4660]: E1129 08:32:41.698282 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:32:52 crc kubenswrapper[4660]: I1129 08:32:52.257446 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-znn4f_a6fc6ac1-6b93-4e45-a741-9df933ea2d11/control-plane-machine-set-operator/0.log" Nov 29 08:32:52 crc kubenswrapper[4660]: I1129 08:32:52.456787 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7j5ts_133a42bf-5cdf-4614-8a42-4ce3e350481e/kube-rbac-proxy/0.log" Nov 29 08:32:52 crc kubenswrapper[4660]: I1129 08:32:52.482042 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7j5ts_133a42bf-5cdf-4614-8a42-4ce3e350481e/machine-api-operator/0.log" Nov 29 08:32:56 crc kubenswrapper[4660]: I1129 08:32:56.693738 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:32:56 crc kubenswrapper[4660]: E1129 08:32:56.694508 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" 
podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:33:07 crc kubenswrapper[4660]: I1129 08:33:07.289038 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-vsxjs_c7d3889b-9b53-40ae-9a2e-39e7080e11c9/cert-manager-controller/0.log" Nov 29 08:33:07 crc kubenswrapper[4660]: I1129 08:33:07.375105 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s4hsk_bb0d8d41-b2d2-432b-865f-0069bd153d0a/cert-manager-cainjector/0.log" Nov 29 08:33:07 crc kubenswrapper[4660]: I1129 08:33:07.560735 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-b4x7w_fd8f5350-5025-49b7-85c6-5f7c1d5724a7/cert-manager-webhook/0.log" Nov 29 08:33:10 crc kubenswrapper[4660]: I1129 08:33:10.693796 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:33:10 crc kubenswrapper[4660]: E1129 08:33:10.694672 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.241855 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:19 crc kubenswrapper[4660]: E1129 08:33:19.242881 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="extract-utilities" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.242900 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="extract-utilities" Nov 29 08:33:19 crc kubenswrapper[4660]: E1129 08:33:19.242916 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="extract-content" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.242924 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="extract-content" Nov 29 08:33:19 crc kubenswrapper[4660]: E1129 08:33:19.242939 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="registry-server" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.242946 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="registry-server" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.243181 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac446c4-1114-46d7-898a-f6c5e7c9f5e9" containerName="registry-server" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.245923 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.254898 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.408028 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.408373 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwrt\" (UniqueName: \"kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.408400 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.509711 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.509821 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwrt\" (UniqueName: \"kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.509842 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.510393 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.510695 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.544817 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nnwrt\" (UniqueName: \"kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt\") pod \"redhat-operators-65w79\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:19 crc kubenswrapper[4660]: I1129 08:33:19.566797 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:20 crc kubenswrapper[4660]: I1129 08:33:20.070326 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:20 crc kubenswrapper[4660]: I1129 08:33:20.556332 4660 generic.go:334] "Generic (PLEG): container finished" podID="3b395176-6900-4928-9710-f25318843b18" containerID="77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20" exitCode=0 Nov 29 08:33:20 crc kubenswrapper[4660]: I1129 08:33:20.556452 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerDied","Data":"77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20"} Nov 29 08:33:20 crc kubenswrapper[4660]: I1129 08:33:20.556643 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerStarted","Data":"f740cb9aafbec83601941b13671b6f6c2fcc40d11e2279a81c1c112fcd50ccb3"} Nov 29 08:33:20 crc kubenswrapper[4660]: I1129 08:33:20.558935 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.453432 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-54kd5_1f69a645-8449-4c71-abdb-2d9a1413eae0/nmstate-console-plugin/0.log" Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.531302 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hzjdq_00b98def-8412-4510-a607-30ea7c13600d/nmstate-handler/0.log" Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.696512 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gxczb_abb40e0e-8d39-4ede-a762-2968c5ae46a1/kube-rbac-proxy/0.log" Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.783065 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gxczb_abb40e0e-8d39-4ede-a762-2968c5ae46a1/nmstate-metrics/0.log" Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.896985 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-bcpsd_8355ccfb-5f01-461d-9aca-89e61881e1d2/nmstate-operator/0.log" Nov 29 08:33:21 crc kubenswrapper[4660]: I1129 08:33:21.998728 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-ds7np_c3aaf1b2-a146-43cd-91ab-8ee65cff6e44/nmstate-webhook/0.log" Nov 29 08:33:22 crc kubenswrapper[4660]: I1129 08:33:22.577716 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerStarted","Data":"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59"} Nov 29 08:33:24 crc kubenswrapper[4660]: I1129 08:33:24.693275 4660 scope.go:117] "RemoveContainer" 
containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:33:24 crc kubenswrapper[4660]: E1129 08:33:24.694148 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:33:25 crc kubenswrapper[4660]: I1129 08:33:25.608350 4660 generic.go:334] "Generic (PLEG): container finished" podID="3b395176-6900-4928-9710-f25318843b18" containerID="7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59" exitCode=0 Nov 29 08:33:25 crc kubenswrapper[4660]: I1129 08:33:25.608429 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerDied","Data":"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59"} Nov 29 08:33:26 crc kubenswrapper[4660]: I1129 08:33:26.618523 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerStarted","Data":"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944"} Nov 29 08:33:26 crc kubenswrapper[4660]: I1129 08:33:26.636975 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65w79" podStartSLOduration=2.015202122 podStartE2EDuration="7.636956512s" podCreationTimestamp="2025-11-29 08:33:19 +0000 UTC" firstStartedPulling="2025-11-29 08:33:20.558577411 +0000 UTC m=+4691.112107310" lastFinishedPulling="2025-11-29 08:33:26.180331801 +0000 UTC m=+4696.733861700" observedRunningTime="2025-11-29 08:33:26.635518003 +0000 UTC m=+4697.189047902" watchObservedRunningTime="2025-11-29 08:33:26.636956512 +0000 UTC m=+4697.190486421" Nov 29 08:33:29 crc kubenswrapper[4660]: I1129 08:33:29.567470 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:29 crc kubenswrapper[4660]: I1129 08:33:29.568528 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:30 crc kubenswrapper[4660]: I1129 08:33:30.614978 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65w79" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="registry-server" probeResult="failure" output=< Nov 29 08:33:30 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Nov 29 08:33:30 crc kubenswrapper[4660]: > Nov 29 08:33:38 crc kubenswrapper[4660]: I1129 08:33:38.693851 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.404423 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdr7b_fb85aed1-c862-47ce-84e9-e5d44218faff/kube-rbac-proxy/0.log" Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.549159 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdr7b_fb85aed1-c862-47ce-84e9-e5d44218faff/controller/0.log" Nov 29 08:33:39 crc 
kubenswrapper[4660]: I1129 08:33:39.642038 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.708452 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.786998 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a"} Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.859166 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log" Nov 29 08:33:39 crc kubenswrapper[4660]: I1129 08:33:39.887374 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.050216 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.081440 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.095604 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.101431 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.344836 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.382948 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.404767 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.407336 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.551569 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-frr-files/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.591485 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/controller/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.605417 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-reloader/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.616948 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/cp-metrics/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.813946 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65w79" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="registry-server" containerID="cri-o://d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944" gracePeriod=2 Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.814936 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/frr-metrics/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.867575 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/kube-rbac-proxy/0.log" Nov 29 08:33:40 crc kubenswrapper[4660]: I1129 08:33:40.905232 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/kube-rbac-proxy-frr/0.log" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.154680 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/reloader/0.log" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.254240 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-pf7m4_2ea3483d-b488-4691-b2f6-3bdb54b0ef49/frr-k8s-webhook-server/0.log" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.372349 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.456056 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnwrt\" (UniqueName: \"kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt\") pod \"3b395176-6900-4928-9710-f25318843b18\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.456319 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities\") pod \"3b395176-6900-4928-9710-f25318843b18\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.456383 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content\") pod \"3b395176-6900-4928-9710-f25318843b18\" (UID: \"3b395176-6900-4928-9710-f25318843b18\") " Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.458302 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities" (OuterVolumeSpecName: "utilities") pod "3b395176-6900-4928-9710-f25318843b18" (UID: "3b395176-6900-4928-9710-f25318843b18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.467812 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt" (OuterVolumeSpecName: "kube-api-access-nnwrt") pod "3b395176-6900-4928-9710-f25318843b18" (UID: "3b395176-6900-4928-9710-f25318843b18"). InnerVolumeSpecName "kube-api-access-nnwrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.539231 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6cfc5c9847-cf8qp_0f9a4dcf-c281-4ce1-93aa-e2d82c0bda87/manager/0.log" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.559029 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.559071 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnwrt\" (UniqueName: \"kubernetes.io/projected/3b395176-6900-4928-9710-f25318843b18-kube-api-access-nnwrt\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.571980 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b395176-6900-4928-9710-f25318843b18" (UID: "3b395176-6900-4928-9710-f25318843b18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.660290 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b395176-6900-4928-9710-f25318843b18-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.825579 4660 generic.go:334] "Generic (PLEG): container finished" podID="3b395176-6900-4928-9710-f25318843b18" containerID="d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944" exitCode=0 Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.825693 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65w79" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.825697 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerDied","Data":"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944"} Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.825886 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65w79" event={"ID":"3b395176-6900-4928-9710-f25318843b18","Type":"ContainerDied","Data":"f740cb9aafbec83601941b13671b6f6c2fcc40d11e2279a81c1c112fcd50ccb3"} Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.825916 4660 scope.go:117] "RemoveContainer" containerID="d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.866228 4660 scope.go:117] "RemoveContainer" containerID="7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.881295 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84c66bf9fd-dsq4c_28d7af7a-86cc-4ceb-bc24-eab722a9813a/webhook-server/0.log" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.891925 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.902680 4660 scope.go:117] "RemoveContainer" containerID="77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.907315 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65w79"] Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.948594 4660 scope.go:117] "RemoveContainer" containerID="d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944" Nov 29 08:33:41 crc kubenswrapper[4660]: E1129 08:33:41.949049 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944\": container with ID starting with d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944 not found: ID does not exist" containerID="d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.949075 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944"} err="failed to get container status \"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944\": rpc error: code = NotFound desc = could not find container \"d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944\": container with ID starting with d29879b20384cf954c1b13f994d31a89a72b104b2fc46a29b1afb1adfad3e944 not found: ID does not exist" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.949097 4660 scope.go:117] "RemoveContainer" containerID="7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59" Nov 29 08:33:41 crc kubenswrapper[4660]: E1129 08:33:41.950842 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59\": container with ID starting with 
7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59 not found: ID does not exist" containerID="7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.950864 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59"} err="failed to get container status \"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59\": rpc error: code = NotFound desc = could not find container \"7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59\": container with ID starting with 7ad7983c10634b2c3c349d7012a5134b4d0353d34620d5a712da8a8a1a63ba59 not found: ID does not exist" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.950880 4660 scope.go:117] "RemoveContainer" containerID="77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20" Nov 29 08:33:41 crc kubenswrapper[4660]: E1129 08:33:41.951168 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20\": container with ID starting with 77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20 not found: ID does not exist" containerID="77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20" Nov 29 08:33:41 crc kubenswrapper[4660]: I1129 08:33:41.951207 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20"} err="failed to get container status \"77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20\": rpc error: code = NotFound desc = could not find container \"77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20\": container with ID starting with 77c8ddaade3c390070dcfd1cb133a71c87d3c71a0583a4125f17df8327855f20 not found: ID does not exist" Nov 29 08:33:42 crc kubenswrapper[4660]: I1129 08:33:42.113546 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcx42_ff906a3b-62c0-4073-afaf-67e927a77020/kube-rbac-proxy/0.log" Nov 29 08:33:42 crc kubenswrapper[4660]: I1129 08:33:42.194799 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szl5x_05fec9d8-e898-467e-9938-33ce089b3d15/frr/0.log" Nov 29 08:33:42 crc kubenswrapper[4660]: I1129 08:33:42.536035 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcx42_ff906a3b-62c0-4073-afaf-67e927a77020/speaker/0.log" Nov 29 08:33:43 crc kubenswrapper[4660]: I1129 08:33:43.704257 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b395176-6900-4928-9710-f25318843b18" path="/var/lib/kubelet/pods/3b395176-6900-4928-9710-f25318843b18/volumes" Nov 29 08:33:55 crc kubenswrapper[4660]: I1129 08:33:55.603552 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.012409 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.053333 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.053885 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.209086 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.276928 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/extract/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.330347 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f5r2km_ce7c0bf6-a2b1-40a0-b4bb-997251bda272/pull/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.440514 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.626488 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.658139 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.661262 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.843895 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/util/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.864895 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/extract/0.log" Nov 29 08:33:56 crc kubenswrapper[4660]: I1129 08:33:56.873028 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83975tm_60b7eb5e-6d0c-47e0-bdfe-20c1069056a9/pull/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.062182 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.254151 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log" Nov 29 08:33:57 crc 
kubenswrapper[4660]: I1129 08:33:57.269070 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.280023 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.482908 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-utilities/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.498977 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/extract-content/0.log" Nov 29 08:33:57 crc kubenswrapper[4660]: I1129 08:33:57.849292 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.059088 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d266_c1d8cc32-31a1-4eb6-866d-ce7bc2082570/registry-server/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.108105 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.129379 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.187821 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.270240 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-utilities/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.311847 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/extract-content/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.559800 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4msqn_f9482d0d-cad1-43a2-a0f9-523323125ae2/marketplace-operator/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.602228 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nqr6r_2f58b902-a7d3-41b2-8172-b56e91d6010d/registry-server/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.712657 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.878082 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log" Nov 29 08:33:58 crc 
kubenswrapper[4660]: I1129 08:33:58.895833 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log" Nov 29 08:33:58 crc kubenswrapper[4660]: I1129 08:33:58.960209 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.071635 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-utilities/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.077417 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/extract-content/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.254526 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-57l9d_19f67b0c-c303-4c77-84a8-5b3e11bac292/registry-server/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.315143 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.503901 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.525893 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.526115 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.745001 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-utilities/0.log" Nov 29 08:33:59 crc kubenswrapper[4660]: I1129 08:33:59.765634 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/extract-content/0.log" Nov 29 08:34:00 crc kubenswrapper[4660]: I1129 08:34:00.230524 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l9mbq_eacee01a-4708-4371-8721-a6ae49dd8f01/registry-server/0.log" Nov 29 08:35:55 crc kubenswrapper[4660]: I1129 08:35:55.884597 4660 generic.go:334] "Generic (PLEG): container finished" podID="9126939e-659d-4706-bbab-3ddb63a1be16" containerID="29a90d9f8ab6f6e5fdd6909a74259035b5b8835a518cac50cd13a4a47c669545" exitCode=0 Nov 29 08:35:55 crc kubenswrapper[4660]: I1129 08:35:55.884652 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nbmbp/must-gather-phc4j" event={"ID":"9126939e-659d-4706-bbab-3ddb63a1be16","Type":"ContainerDied","Data":"29a90d9f8ab6f6e5fdd6909a74259035b5b8835a518cac50cd13a4a47c669545"} Nov 29 08:35:55 crc kubenswrapper[4660]: I1129 08:35:55.887403 4660 scope.go:117] "RemoveContainer" containerID="29a90d9f8ab6f6e5fdd6909a74259035b5b8835a518cac50cd13a4a47c669545" Nov 29 08:35:56 
crc kubenswrapper[4660]: I1129 08:35:56.914737 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nbmbp_must-gather-phc4j_9126939e-659d-4706-bbab-3ddb63a1be16/gather/0.log" Nov 29 08:36:05 crc kubenswrapper[4660]: I1129 08:36:05.499882 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:36:05 crc kubenswrapper[4660]: I1129 08:36:05.501355 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:36:06 crc kubenswrapper[4660]: I1129 08:36:06.846174 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nbmbp/must-gather-phc4j"] Nov 29 08:36:06 crc kubenswrapper[4660]: I1129 08:36:06.846830 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nbmbp/must-gather-phc4j" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="copy" containerID="cri-o://0b60fb27dab3803b540d56a2c692d61e0d8079b85003e7e8437f868abdc65913" gracePeriod=2 Nov 29 08:36:06 crc kubenswrapper[4660]: I1129 08:36:06.859223 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nbmbp/must-gather-phc4j"] Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.068704 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nbmbp_must-gather-phc4j_9126939e-659d-4706-bbab-3ddb63a1be16/copy/0.log" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.070329 4660 generic.go:334] "Generic (PLEG): container finished" podID="9126939e-659d-4706-bbab-3ddb63a1be16" containerID="0b60fb27dab3803b540d56a2c692d61e0d8079b85003e7e8437f868abdc65913" exitCode=143 Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.291290 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nbmbp_must-gather-phc4j_9126939e-659d-4706-bbab-3ddb63a1be16/copy/0.log" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.291893 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.371299 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2fls\" (UniqueName: \"kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls\") pod \"9126939e-659d-4706-bbab-3ddb63a1be16\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.371445 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output\") pod \"9126939e-659d-4706-bbab-3ddb63a1be16\" (UID: \"9126939e-659d-4706-bbab-3ddb63a1be16\") " Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.388944 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls" (OuterVolumeSpecName: "kube-api-access-b2fls") pod "9126939e-659d-4706-bbab-3ddb63a1be16" (UID: "9126939e-659d-4706-bbab-3ddb63a1be16"). InnerVolumeSpecName "kube-api-access-b2fls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.472956 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2fls\" (UniqueName: \"kubernetes.io/projected/9126939e-659d-4706-bbab-3ddb63a1be16-kube-api-access-b2fls\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.551215 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9126939e-659d-4706-bbab-3ddb63a1be16" (UID: "9126939e-659d-4706-bbab-3ddb63a1be16"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.575130 4660 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9126939e-659d-4706-bbab-3ddb63a1be16-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:07 crc kubenswrapper[4660]: I1129 08:36:07.705490 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" path="/var/lib/kubelet/pods/9126939e-659d-4706-bbab-3ddb63a1be16/volumes" Nov 29 08:36:08 crc kubenswrapper[4660]: I1129 08:36:08.079406 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nbmbp_must-gather-phc4j_9126939e-659d-4706-bbab-3ddb63a1be16/copy/0.log" Nov 29 08:36:08 crc kubenswrapper[4660]: I1129 08:36:08.080107 4660 scope.go:117] "RemoveContainer" containerID="0b60fb27dab3803b540d56a2c692d61e0d8079b85003e7e8437f868abdc65913" Nov 29 08:36:08 crc kubenswrapper[4660]: I1129 08:36:08.080187 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nbmbp/must-gather-phc4j" Nov 29 08:36:08 crc kubenswrapper[4660]: I1129 08:36:08.108789 4660 scope.go:117] "RemoveContainer" containerID="29a90d9f8ab6f6e5fdd6909a74259035b5b8835a518cac50cd13a4a47c669545" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.587672 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:19 crc kubenswrapper[4660]: E1129 08:36:19.588679 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="gather" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588694 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="gather" Nov 29 08:36:19 crc kubenswrapper[4660]: E1129 08:36:19.588707 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="extract-utilities" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588713 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="extract-utilities" Nov 29 08:36:19 crc kubenswrapper[4660]: E1129 08:36:19.588725 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="registry-server" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588732 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="registry-server" Nov 29 08:36:19 crc kubenswrapper[4660]: E1129 08:36:19.588746 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="copy" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588751 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="copy" Nov 29 08:36:19 crc kubenswrapper[4660]: E1129 08:36:19.588764 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="extract-content" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588771 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="extract-content" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588938 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="gather" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588959 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b395176-6900-4928-9710-f25318843b18" containerName="registry-server" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.588976 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="9126939e-659d-4706-bbab-3ddb63a1be16" containerName="copy" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.590499 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.612117 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.733137 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.733269 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqz72\" (UniqueName: \"kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.733353 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.835092 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqz72\" (UniqueName: \"kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.835226 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.835305 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.836103 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.836983 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.864672 4660 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qqz72\" (UniqueName: \"kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72\") pod \"community-operators-nns6t\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:19 crc kubenswrapper[4660]: I1129 08:36:19.910457 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:20 crc kubenswrapper[4660]: I1129 08:36:20.560985 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:21 crc kubenswrapper[4660]: I1129 08:36:21.218054 4660 generic.go:334] "Generic (PLEG): container finished" podID="a73ce820-dd51-4999-b887-05ff140430fd" containerID="9fa51a7318a006b1df4175ad0b6a0a6e04be21ac026f0b5867a0eb8bfa617410" exitCode=0 Nov 29 08:36:21 crc kubenswrapper[4660]: I1129 08:36:21.218114 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerDied","Data":"9fa51a7318a006b1df4175ad0b6a0a6e04be21ac026f0b5867a0eb8bfa617410"} Nov 29 08:36:21 crc kubenswrapper[4660]: I1129 08:36:21.218360 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerStarted","Data":"26d2a898a6602ee947e6405cd88eb48c215a9e025a8bae9101c39b2dc5130516"} Nov 29 08:36:22 crc kubenswrapper[4660]: I1129 08:36:22.236631 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerStarted","Data":"dd339e9094347cf96ef781790799108aa914045c20c1237111ef7354a484901f"} Nov 29 08:36:23 crc kubenswrapper[4660]: I1129 08:36:23.247706 4660 generic.go:334] "Generic (PLEG): container finished" podID="a73ce820-dd51-4999-b887-05ff140430fd" containerID="dd339e9094347cf96ef781790799108aa914045c20c1237111ef7354a484901f" exitCode=0 Nov 29 08:36:23 crc kubenswrapper[4660]: I1129 08:36:23.247747 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerDied","Data":"dd339e9094347cf96ef781790799108aa914045c20c1237111ef7354a484901f"} Nov 29 08:36:24 crc kubenswrapper[4660]: I1129 08:36:24.256998 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerStarted","Data":"8ee60be0a10b5ca760914e2676cf2473bd3d8ac46cdb3f897d814d55ab1072e7"} Nov 29 08:36:24 crc kubenswrapper[4660]: I1129 08:36:24.283777 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nns6t" podStartSLOduration=2.809244838 podStartE2EDuration="5.283758897s" podCreationTimestamp="2025-11-29 08:36:19 +0000 UTC" firstStartedPulling="2025-11-29 08:36:21.222860364 +0000 UTC m=+4871.776390263" lastFinishedPulling="2025-11-29 08:36:23.697374433 +0000 UTC m=+4874.250904322" observedRunningTime="2025-11-29 08:36:24.278634148 +0000 UTC m=+4874.832164067" watchObservedRunningTime="2025-11-29 08:36:24.283758897 +0000 UTC m=+4874.837288796" Nov 29 08:36:29 crc kubenswrapper[4660]: I1129 08:36:29.911433 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:29 crc kubenswrapper[4660]: I1129 08:36:29.912380 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:30 crc kubenswrapper[4660]: I1129 08:36:30.002001 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:30 crc kubenswrapper[4660]: I1129 08:36:30.374188 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:30 crc kubenswrapper[4660]: I1129 08:36:30.440393 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:32 crc kubenswrapper[4660]: I1129 08:36:32.338913 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nns6t" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="registry-server" containerID="cri-o://8ee60be0a10b5ca760914e2676cf2473bd3d8ac46cdb3f897d814d55ab1072e7" gracePeriod=2 Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.353177 4660 generic.go:334] "Generic (PLEG): container finished" podID="a73ce820-dd51-4999-b887-05ff140430fd" containerID="8ee60be0a10b5ca760914e2676cf2473bd3d8ac46cdb3f897d814d55ab1072e7" exitCode=0 Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.353319 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerDied","Data":"8ee60be0a10b5ca760914e2676cf2473bd3d8ac46cdb3f897d814d55ab1072e7"} Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.643305 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.710887 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities\") pod \"a73ce820-dd51-4999-b887-05ff140430fd\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.711004 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content\") pod \"a73ce820-dd51-4999-b887-05ff140430fd\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.711125 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqz72\" (UniqueName: \"kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72\") pod \"a73ce820-dd51-4999-b887-05ff140430fd\" (UID: \"a73ce820-dd51-4999-b887-05ff140430fd\") " Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.714126 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities" (OuterVolumeSpecName: "utilities") pod "a73ce820-dd51-4999-b887-05ff140430fd" (UID: "a73ce820-dd51-4999-b887-05ff140430fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.725954 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72" (OuterVolumeSpecName: "kube-api-access-qqz72") pod "a73ce820-dd51-4999-b887-05ff140430fd" (UID: "a73ce820-dd51-4999-b887-05ff140430fd"). InnerVolumeSpecName "kube-api-access-qqz72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.786034 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a73ce820-dd51-4999-b887-05ff140430fd" (UID: "a73ce820-dd51-4999-b887-05ff140430fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.820035 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqz72\" (UniqueName: \"kubernetes.io/projected/a73ce820-dd51-4999-b887-05ff140430fd-kube-api-access-qqz72\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.820078 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:33 crc kubenswrapper[4660]: I1129 08:36:33.820090 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73ce820-dd51-4999-b887-05ff140430fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.368928 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nns6t" event={"ID":"a73ce820-dd51-4999-b887-05ff140430fd","Type":"ContainerDied","Data":"26d2a898a6602ee947e6405cd88eb48c215a9e025a8bae9101c39b2dc5130516"} Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.369330 4660 scope.go:117] "RemoveContainer" containerID="8ee60be0a10b5ca760914e2676cf2473bd3d8ac46cdb3f897d814d55ab1072e7" Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.369043 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nns6t" Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.404810 4660 scope.go:117] "RemoveContainer" containerID="dd339e9094347cf96ef781790799108aa914045c20c1237111ef7354a484901f" Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.446831 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.471866 4660 scope.go:117] "RemoveContainer" containerID="9fa51a7318a006b1df4175ad0b6a0a6e04be21ac026f0b5867a0eb8bfa617410" Nov 29 08:36:34 crc kubenswrapper[4660]: I1129 08:36:34.478343 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nns6t"] Nov 29 08:36:35 crc kubenswrapper[4660]: I1129 08:36:35.503162 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:36:35 crc kubenswrapper[4660]: I1129 08:36:35.503535 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:36:35 crc kubenswrapper[4660]: I1129 08:36:35.710125 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73ce820-dd51-4999-b887-05ff140430fd" path="/var/lib/kubelet/pods/a73ce820-dd51-4999-b887-05ff140430fd/volumes" Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.499980 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.500580 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.500743 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.501903 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.502039 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a" 
gracePeriod=600 Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.731193 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a" exitCode=0 Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.731250 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a"} Nov 29 08:37:05 crc kubenswrapper[4660]: I1129 08:37:05.731291 4660 scope.go:117] "RemoveContainer" containerID="464ec159620c6b75ce53531ff29c21ea83b9591c75854de2eb43032b905f0671" Nov 29 08:37:06 crc kubenswrapper[4660]: I1129 08:37:06.747811 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerStarted","Data":"b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a"} Nov 29 08:37:08 crc kubenswrapper[4660]: I1129 08:37:08.319922 4660 scope.go:117] "RemoveContainer" containerID="3dfec153723dc34679c9e250a6cf0bb4c656b0d4bcdf1ccb3377d80be6ae0202" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.611666 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:33 crc kubenswrapper[4660]: E1129 08:38:33.612852 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="registry-server" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.612875 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="registry-server" Nov 29 08:38:33 crc kubenswrapper[4660]: E1129 08:38:33.612917 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="extract-utilities" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.612929 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="extract-utilities" Nov 29 08:38:33 crc kubenswrapper[4660]: E1129 08:38:33.612945 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="extract-content" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.612956 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="extract-content" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.613243 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73ce820-dd51-4999-b887-05ff140430fd" containerName="registry-server" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.617158 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.636781 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.717244 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7gr\" (UniqueName: \"kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.717319 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.717823 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.819578 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.819932 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.820990 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7gr\" (UniqueName: \"kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.821293 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.821377 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.840181 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vw7gr\" (UniqueName: \"kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr\") pod \"redhat-marketplace-rfgkf\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:33 crc kubenswrapper[4660]: I1129 08:38:33.961447 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:34 crc kubenswrapper[4660]: I1129 08:38:34.449841 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:34 crc kubenswrapper[4660]: W1129 08:38:34.494513 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a721d89_655d_44c4_a0a1_bf1483abc66b.slice/crio-48d986a74ba6e0708aae7e9735609213031498fb095b9bd9b724d6ef2c61cea6 WatchSource:0}: Error finding container 48d986a74ba6e0708aae7e9735609213031498fb095b9bd9b724d6ef2c61cea6: Status 404 returned error can't find the container with id 48d986a74ba6e0708aae7e9735609213031498fb095b9bd9b724d6ef2c61cea6 Nov 29 08:38:34 crc kubenswrapper[4660]: I1129 08:38:34.674301 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerStarted","Data":"48d986a74ba6e0708aae7e9735609213031498fb095b9bd9b724d6ef2c61cea6"} Nov 29 08:38:35 crc kubenswrapper[4660]: I1129 08:38:35.689347 4660 generic.go:334] "Generic (PLEG): container finished" podID="9a721d89-655d-44c4-a0a1-bf1483abc66b" containerID="e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a" exitCode=0 Nov 29 08:38:35 crc kubenswrapper[4660]: I1129 08:38:35.689520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerDied","Data":"e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a"} Nov 29 08:38:35 crc kubenswrapper[4660]: I1129 08:38:35.694122 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 29 08:38:37 crc kubenswrapper[4660]: I1129 08:38:37.709366 4660 generic.go:334] "Generic (PLEG): container finished" podID="9a721d89-655d-44c4-a0a1-bf1483abc66b" containerID="f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2" exitCode=0 Nov 29 08:38:37 crc kubenswrapper[4660]: I1129 08:38:37.710808 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerDied","Data":"f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2"} Nov 29 08:38:38 crc kubenswrapper[4660]: I1129 08:38:38.725853 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerStarted","Data":"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681"} Nov 29 08:38:38 crc kubenswrapper[4660]: I1129 08:38:38.763227 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfgkf" podStartSLOduration=3.12169617 podStartE2EDuration="5.763203507s" podCreationTimestamp="2025-11-29 08:38:33 +0000 UTC" firstStartedPulling="2025-11-29 08:38:35.693472455 +0000 UTC m=+5006.247002394" 
lastFinishedPulling="2025-11-29 08:38:38.334979822 +0000 UTC m=+5008.888509731" observedRunningTime="2025-11-29 08:38:38.758791538 +0000 UTC m=+5009.312321477" watchObservedRunningTime="2025-11-29 08:38:38.763203507 +0000 UTC m=+5009.316733406" Nov 29 08:38:43 crc kubenswrapper[4660]: I1129 08:38:43.962277 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:43 crc kubenswrapper[4660]: I1129 08:38:43.962841 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:44 crc kubenswrapper[4660]: I1129 08:38:44.042676 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:44 crc kubenswrapper[4660]: I1129 08:38:44.864491 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:44 crc kubenswrapper[4660]: I1129 08:38:44.920899 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:46 crc kubenswrapper[4660]: I1129 08:38:46.812342 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rfgkf" podUID="9a721d89-655d-44c4-a0a1-bf1483abc66b" containerName="registry-server" containerID="cri-o://dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681" gracePeriod=2 Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.316434 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.385115 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content\") pod \"9a721d89-655d-44c4-a0a1-bf1483abc66b\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.385186 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities\") pod \"9a721d89-655d-44c4-a0a1-bf1483abc66b\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.385277 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7gr\" (UniqueName: \"kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr\") pod \"9a721d89-655d-44c4-a0a1-bf1483abc66b\" (UID: \"9a721d89-655d-44c4-a0a1-bf1483abc66b\") " Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.386268 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities" (OuterVolumeSpecName: "utilities") pod "9a721d89-655d-44c4-a0a1-bf1483abc66b" (UID: "9a721d89-655d-44c4-a0a1-bf1483abc66b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.390474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr" (OuterVolumeSpecName: "kube-api-access-vw7gr") pod "9a721d89-655d-44c4-a0a1-bf1483abc66b" (UID: "9a721d89-655d-44c4-a0a1-bf1483abc66b"). InnerVolumeSpecName "kube-api-access-vw7gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.405322 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a721d89-655d-44c4-a0a1-bf1483abc66b" (UID: "9a721d89-655d-44c4-a0a1-bf1483abc66b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.486744 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.486775 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a721d89-655d-44c4-a0a1-bf1483abc66b-utilities\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.486787 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw7gr\" (UniqueName: \"kubernetes.io/projected/9a721d89-655d-44c4-a0a1-bf1483abc66b-kube-api-access-vw7gr\") on node \"crc\" DevicePath \"\"" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.824589 4660 generic.go:334] "Generic (PLEG): container finished" podID="9a721d89-655d-44c4-a0a1-bf1483abc66b" containerID="dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681" exitCode=0 Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.824655 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerDied","Data":"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681"} Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.824724 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfgkf" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.824749 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfgkf" event={"ID":"9a721d89-655d-44c4-a0a1-bf1483abc66b","Type":"ContainerDied","Data":"48d986a74ba6e0708aae7e9735609213031498fb095b9bd9b724d6ef2c61cea6"} Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.824777 4660 scope.go:117] "RemoveContainer" containerID="dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.854604 4660 scope.go:117] "RemoveContainer" containerID="f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2" Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.880420 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:47 crc kubenswrapper[4660]: I1129 08:38:47.890459 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfgkf"] Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.227532 4660 scope.go:117] "RemoveContainer" containerID="e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.292911 4660 scope.go:117] "RemoveContainer" containerID="dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681" Nov 29 08:38:48 crc kubenswrapper[4660]: E1129 08:38:48.301166 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681\": container with ID starting with dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681 not found: ID does not exist" containerID="dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.301204 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681"} err="failed to get container status \"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681\": rpc error: code = NotFound desc = could not find container \"dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681\": container with ID starting with dbbf9043b4b2dcde48a3e2c264c4a80473a5bc887521c550e9b00e33625b6681 not found: ID does not exist" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.301228 4660 scope.go:117] "RemoveContainer" containerID="f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2" Nov 29 08:38:48 crc kubenswrapper[4660]: E1129 08:38:48.301473 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2\": container with ID starting with f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2 not found: ID does not exist" containerID="f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.301513 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2"} err="failed to get container status \"f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2\": rpc error: code = NotFound desc = could not find 
container \"f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2\": container with ID starting with f4b94e21c06db165c7d8c86d384ddd541fbfd8615b60b9484fe4b8df277a2fe2 not found: ID does not exist" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.301540 4660 scope.go:117] "RemoveContainer" containerID="e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a" Nov 29 08:38:48 crc kubenswrapper[4660]: E1129 08:38:48.301798 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a\": container with ID starting with e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a not found: ID does not exist" containerID="e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a" Nov 29 08:38:48 crc kubenswrapper[4660]: I1129 08:38:48.301822 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a"} err="failed to get container status \"e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a\": rpc error: code = NotFound desc = could not find container \"e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a\": container with ID starting with e701f25e17cf47e4ca69bae0548d1007b99949a09d472234e093b4f9289c214a not found: ID does not exist" Nov 29 08:38:49 crc kubenswrapper[4660]: I1129 08:38:49.708916 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a721d89-655d-44c4-a0a1-bf1483abc66b" path="/var/lib/kubelet/pods/9a721d89-655d-44c4-a0a1-bf1483abc66b/volumes" Nov 29 08:39:05 crc kubenswrapper[4660]: I1129 08:39:05.499950 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:39:05 crc kubenswrapper[4660]: I1129 08:39:05.500480 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:39:35 crc kubenswrapper[4660]: I1129 08:39:35.500086 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 08:39:35 crc kubenswrapper[4660]: I1129 08:39:35.500570 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.500545 4660 patch_prober.go:28] interesting pod/machine-config-daemon-bjw9w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 29 
08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.501237 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.501309 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.502366 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a"} pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.502486 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerName="machine-config-daemon" containerID="cri-o://b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a" gracePeriod=600 Nov 29 08:40:05 crc kubenswrapper[4660]: E1129 08:40:05.630445 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.707477 4660 generic.go:334] "Generic (PLEG): container finished" podID="0f4a7492-b946-4db3-b301-0b860ed7cce1" containerID="b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a" exitCode=0 Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.707501 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" event={"ID":"0f4a7492-b946-4db3-b301-0b860ed7cce1","Type":"ContainerDied","Data":"b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a"} Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.707586 4660 scope.go:117] "RemoveContainer" containerID="d83827a77d012bf7e05cfefbc568ae124071355a1dc6003bb1065f52cd76371a" Nov 29 08:40:05 crc kubenswrapper[4660]: I1129 08:40:05.708497 4660 scope.go:117] "RemoveContainer" containerID="b8eda2dcf5041d45cbbaaf3cd6b3978a2d719bf4e23acc5370cf9a73e988284a" Nov 29 08:40:05 crc kubenswrapper[4660]: E1129 08:40:05.709041 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bjw9w_openshift-machine-config-operator(0f4a7492-b946-4db3-b301-0b860ed7cce1)\"" pod="openshift-machine-config-operator/machine-config-daemon-bjw9w" podUID="0f4a7492-b946-4db3-b301-0b860ed7cce1"